[PATCH v2 00/12] crypto: caam - Add RTA descriptor creation library
Horia Geanta
2014-08-14 12:54:22 UTC
Hi,

This patch set adds Run Time Assembler (RTA) SEC descriptor library.
RTA is a replacement for the incumbent "inline append".

The library is intended to be a single code base for SEC descriptor creation
across all Freescale products. This brings several advantages, such as the
library being maintained and kept up to date with the latest platforms and
SEC functionalities (e.g. the SEC incarnations present in Layerscape LS1 and LS2).

RTA detects options in SEC descriptors that are not supported
by a given SEC HW revision ("Era") and reports this back.
Say a descriptor uses the Sequence Out Pointer (SOP) option of the SEQINPTR
command, which is supported starting with SEC Era 5. If that descriptor were
built on a P4080R3 platform (which has SEC Era 4), RTA would report
"SEQ IN PTR: Flag(s) not supported by SEC Era 4".
This is extremely useful and saves a lot of debugging time.
SEC HW detects only *some* of these problems, leaving the user wondering what
causes a "DECO Watchdog Timeout". And even when it prints something more
useful, it sometimes does not point to the exact opcode.
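As a rough illustration of the kind of build-time check involved (the SOP flag
encoding, era numbering, and helper name below are made up for the sketch, not
the real RTA API, which lives in flib/rta/seq_in_out_ptr_cmd.h):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical encoding for the Sequence Out Pointer flag (illustrative) */
#define SOP 0x1u

/* SOP on the SEQINPTR command is available starting with SEC Era 5 */
static int seqinptr_flags_ok(unsigned int sec_era, uint32_t flags)
{
	if ((flags & SOP) && sec_era < 5) {
		fprintf(stderr,
			"SEQ IN PTR: Flag(s) not supported by SEC Era %u\n",
			sec_era);
		return 0;	/* rejected at build time, before SEC sees it */
	}
	return 1;
}
```

The point is that the rejection happens when the descriptor is built, instead
of surfacing later as an opaque hardware error.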

Below is a summary of the patch set.

Patches 01-04 are fixes and clean-ups.

Patches 05-07 add the RTA library in 3 steps, to overcome
patch size limitations.
Patch 07 replaces desc.h with a new version from RTA. However, at this stage,
RTA is still not being used.

Patch 08 rewrites the "inline append" descriptors using RTA.
The generated descriptors (hex dumps) were verified to be bit-exact,
with a few exceptions (see the commit message).

Patch 09 removes "inline append" files.

Patch 10 refactors the code that generates the descriptors,
in order to:
- make the code more comprehensible and maintainable
- prepare for the changes in patch 11

Patch 11 moves those descriptors from caamalg.c that could be reused
elsewhere into the RTA library files.

Patch 12 adds support for generating kernel-doc for RTA.
It depends on upstream (torvalds/linux.git) commit
cbb4d3e6510b99522719c5ef0cd0482886a324c0
("scripts/kernel-doc: handle object-like macros").

Thanks,
Horia

Horia Geanta (12):
crypto: caam - completely remove error propagation handling
crypto: caam - desc.h fixes
crypto: caam - code cleanup
crypto: caam - move sec4_sg_entry to sg_sw_sec4.h
crypto: caam - add Run Time Library (RTA) - part 1
crypto: caam - add Run Time Library (RTA) - part 2
crypto: caam - add Run Time Library (RTA) - part 3
crypto: caam - use RTA instead of inline append
crypto: caam - completely remove inline append
crypto: caam - refactor descriptor creation
crypto: caam - move caamalg shared descs in RTA library
crypto: caam - add Run Time Library (RTA) docbook

Documentation/DocBook/Makefile | 3 +-
Documentation/DocBook/rta-api.tmpl | 271 ++++
Documentation/DocBook/rta/.gitignore | 1 +
Documentation/DocBook/rta/Makefile | 5 +
Documentation/DocBook/rta/rta_arch.svg | 381 ++++++
drivers/crypto/caam/Makefile | 4 +-
drivers/crypto/caam/caamalg.c | 799 ++++--------
drivers/crypto/caam/caamhash.c | 536 ++++----
drivers/crypto/caam/caamrng.c | 48 +-
drivers/crypto/caam/compat.h | 1 +
drivers/crypto/caam/ctrl.c | 91 +-
drivers/crypto/caam/ctrl.h | 2 +-
drivers/crypto/caam/desc_constr.h | 388 ------
drivers/crypto/caam/error.c | 7 +-
drivers/crypto/caam/{ => flib}/desc.h | 1335 +++++++++++++++++---
drivers/crypto/caam/flib/desc/algo.h | 88 ++
drivers/crypto/caam/flib/desc/common.h | 151 +++
drivers/crypto/caam/flib/desc/ipsec.h | 550 ++++++++
drivers/crypto/caam/flib/desc/jobdesc.h | 57 +
drivers/crypto/caam/flib/rta.h | 980 ++++++++++++++
drivers/crypto/caam/flib/rta/fifo_load_store_cmd.h | 303 +++++
drivers/crypto/caam/flib/rta/header_cmd.h | 209 +++
drivers/crypto/caam/flib/rta/jump_cmd.h | 168 +++
drivers/crypto/caam/flib/rta/key_cmd.h | 183 +++
drivers/crypto/caam/flib/rta/load_cmd.h | 297 +++++
drivers/crypto/caam/flib/rta/math_cmd.h | 362 ++++++
drivers/crypto/caam/flib/rta/move_cmd.h | 401 ++++++
drivers/crypto/caam/flib/rta/nfifo_cmd.h | 157 +++
drivers/crypto/caam/flib/rta/operation_cmd.h | 545 ++++++++
drivers/crypto/caam/flib/rta/protocol_cmd.h | 595 +++++++++
drivers/crypto/caam/flib/rta/sec_run_time_asm.h | 672 ++++++++++
drivers/crypto/caam/flib/rta/seq_in_out_ptr_cmd.h | 168 +++
drivers/crypto/caam/flib/rta/signature_cmd.h | 36 +
drivers/crypto/caam/flib/rta/store_cmd.h | 145 +++
drivers/crypto/caam/jr.c | 6 +-
drivers/crypto/caam/key_gen.c | 35 +-
drivers/crypto/caam/key_gen.h | 5 +-
drivers/crypto/caam/pdb.h | 402 ------
drivers/crypto/caam/sg_sw_sec4.h | 12 +-
39 files changed, 8438 insertions(+), 1961 deletions(-)
create mode 100644 Documentation/DocBook/rta-api.tmpl
create mode 100644 Documentation/DocBook/rta/.gitignore
create mode 100644 Documentation/DocBook/rta/Makefile
create mode 100644 Documentation/DocBook/rta/rta_arch.svg
delete mode 100644 drivers/crypto/caam/desc_constr.h
rename drivers/crypto/caam/{ => flib}/desc.h (54%)
create mode 100644 drivers/crypto/caam/flib/desc/algo.h
create mode 100644 drivers/crypto/caam/flib/desc/common.h
create mode 100644 drivers/crypto/caam/flib/desc/ipsec.h
create mode 100644 drivers/crypto/caam/flib/desc/jobdesc.h
create mode 100644 drivers/crypto/caam/flib/rta.h
create mode 100644 drivers/crypto/caam/flib/rta/fifo_load_store_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/header_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/jump_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/key_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/load_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/math_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/move_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/nfifo_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/operation_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/protocol_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/sec_run_time_asm.h
create mode 100644 drivers/crypto/caam/flib/rta/seq_in_out_ptr_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/signature_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/store_cmd.h
delete mode 100644 drivers/crypto/caam/pdb.h
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:25 UTC
1. Fix the following sparse/smatch warnings:
drivers/crypto/caam/ctrl.c:365:5: warning: symbol 'caam_get_era' was not declared. Should it be static?
drivers/crypto/caam/ctrl.c:372 caam_get_era() info: loop could be replaced with if statement.
drivers/crypto/caam/ctrl.c:368 caam_get_era() info: ignoring unreachable code.
drivers/crypto/caam/jr.c:68:5: warning: symbol 'caam_jr_shutdown' was not declared. Should it be static?
drivers/crypto/caam/jr.c:475:23: warning: incorrect type in assignment (different address spaces)
drivers/crypto/caam/jr.c:475:23: expected struct caam_job_ring [noderef] <asn:2>*rregs
drivers/crypto/caam/jr.c:475:23: got struct caam_job_ring *<noident>
drivers/crypto/caam/caamrng.c:343 caam_rng_init() error: no modifiers for allocation.

2. remove unreachable code in report_ccb_status
ERRID is a 4-bit field.
Since err_id values are therefore in [0..15] and the err_id_list array has
16 entries, the condition "err_id < ARRAY_SIZE(err_id_list)" is always true.

3. remove unused / unneeded variables

4. remove a precision loss warning for the offset field in the HW s/g table

5. replace offsetof with container_of
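The "always true" argument in item 2 can be checked mechanically; in the
sketch below the mask value and names are illustrative stand-ins for the
driver's ERRID extraction:

```c
#include <stdint.h>

/* ERRID occupies a 4-bit field of the CCB status word (illustrative mask) */
#define ERRID_MASK 0xfu

static unsigned int ccb_err_id(uint32_t status)
{
	/* a 4-bit field can only yield values in [0..15] */
	return status & ERRID_MASK;
}
```

Any table indexed by this value therefore needs exactly 16 entries, and a
bounds check against a 16-entry array can never fail.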
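For item 5, a minimal sketch of why the rewrite is equivalent; container_of
below mirrors the kernel's definition, and struct edesc is a reduced
stand-in for struct aead_edesc:

```c
#include <stddef.h>

/* Simplified version of the kernel's container_of() */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Reduced stand-in for struct aead_edesc */
struct edesc {
	int src_nents;
	unsigned int hw_desc[4];	/* HW descriptor follows the metadata */
};

static struct edesc *edesc_from_desc(unsigned int *desc)
{
	/* replaces: (struct edesc *)((char *)desc - offsetof(...)) */
	return container_of(desc, struct edesc, hw_desc[0]);
}

/* round-trip: a pointer into hw_desc recovers the enclosing edesc */
static int edesc_selftest(void)
{
	struct edesc e;

	return edesc_from_desc(e.hw_desc) == &e;
}
```

The two forms expand to the same pointer arithmetic; container_of simply
states the intent and adds a degree of type checking.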

Signed-off-by: Horia Geanta <***@freescale.com>
---
drivers/crypto/caam/caamalg.c | 59 ++++++++++++++++++----------------------
drivers/crypto/caam/caamhash.c | 12 +++-----
drivers/crypto/caam/caamrng.c | 2 +-
drivers/crypto/caam/ctrl.c | 8 ++++--
drivers/crypto/caam/error.c | 5 ++--
drivers/crypto/caam/jr.c | 4 +--
drivers/crypto/caam/sg_sw_sec4.h | 2 +-
7 files changed, 42 insertions(+), 50 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index a80ea853701d..c3a845856cd0 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -925,8 +925,7 @@ static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct aead_edesc *)((char *)desc -
- offsetof(struct aead_edesc, hw_desc));
+ edesc = container_of(desc, struct aead_edesc, hw_desc[0]);

if (err)
caam_jr_strstatus(jrdev, err);
@@ -964,8 +963,7 @@ static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct aead_edesc *)((char *)desc -
- offsetof(struct aead_edesc, hw_desc));
+ edesc = container_of(desc, struct aead_edesc, hw_desc[0]);

#ifdef DEBUG
print_hex_dump(KERN_ERR, "dstiv @"__stringify(__LINE__)": ",
@@ -1019,8 +1017,7 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ablkcipher_edesc *)((char *)desc -
- offsetof(struct ablkcipher_edesc, hw_desc));
+ edesc = container_of(desc, struct ablkcipher_edesc, hw_desc[0]);

if (err)
caam_jr_strstatus(jrdev, err);
@@ -1052,8 +1049,7 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ablkcipher_edesc *)((char *)desc -
- offsetof(struct ablkcipher_edesc, hw_desc));
+ edesc = container_of(desc, struct ablkcipher_edesc, hw_desc[0]);
if (err)
caam_jr_strstatus(jrdev, err);

@@ -1286,7 +1282,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
int assoc_nents, src_nents, dst_nents = 0;
struct aead_edesc *edesc;
dma_addr_t iv_dma = 0;
- int sgc;
bool all_contig = true;
bool assoc_chained = false, src_chained = false, dst_chained = false;
int ivsize = crypto_aead_ivsize(aead);
@@ -1308,16 +1303,16 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
&src_chained);
}

- sgc = dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
- DMA_TO_DEVICE, assoc_chained);
+ dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
+ DMA_TO_DEVICE, assoc_chained);
if (likely(req->src == req->dst)) {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_BIDIRECTIONAL, src_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_BIDIRECTIONAL, src_chained);
} else {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_TO_DEVICE, src_chained);
- sgc = dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
- DMA_FROM_DEVICE, dst_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_TO_DEVICE, src_chained);
+ dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
+ DMA_FROM_DEVICE, dst_chained);
}

iv_dma = dma_map_single(jrdev, req->iv, ivsize, DMA_TO_DEVICE);
@@ -1485,7 +1480,6 @@ static struct aead_edesc *aead_giv_edesc_alloc(struct aead_givcrypt_request
int assoc_nents, src_nents, dst_nents = 0;
struct aead_edesc *edesc;
dma_addr_t iv_dma = 0;
- int sgc;
u32 contig = GIV_SRC_CONTIG | GIV_DST_CONTIG;
int ivsize = crypto_aead_ivsize(aead);
bool assoc_chained = false, src_chained = false, dst_chained = false;
@@ -1498,16 +1492,16 @@ static struct aead_edesc *aead_giv_edesc_alloc(struct aead_givcrypt_request
dst_nents = sg_count(req->dst, req->cryptlen + ctx->authsize,
&dst_chained);

- sgc = dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
- DMA_TO_DEVICE, assoc_chained);
+ dma_map_sg_chained(jrdev, req->assoc, assoc_nents ? : 1,
+ DMA_TO_DEVICE, assoc_chained);
if (likely(req->src == req->dst)) {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_BIDIRECTIONAL, src_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_BIDIRECTIONAL, src_chained);
} else {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_TO_DEVICE, src_chained);
- sgc = dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
- DMA_FROM_DEVICE, dst_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_TO_DEVICE, src_chained);
+ dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
+ DMA_FROM_DEVICE, dst_chained);
}

iv_dma = dma_map_single(jrdev, greq->giv, ivsize, DMA_TO_DEVICE);
@@ -1655,7 +1649,6 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
struct ablkcipher_edesc *edesc;
dma_addr_t iv_dma = 0;
bool iv_contig = false;
- int sgc;
int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
bool src_chained = false, dst_chained = false;
int sec4_sg_index;
@@ -1666,13 +1659,13 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
dst_nents = sg_count(req->dst, req->nbytes, &dst_chained);

if (likely(req->src == req->dst)) {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_BIDIRECTIONAL, src_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_BIDIRECTIONAL, src_chained);
} else {
- sgc = dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
- DMA_TO_DEVICE, src_chained);
- sgc = dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
- DMA_FROM_DEVICE, dst_chained);
+ dma_map_sg_chained(jrdev, req->src, src_nents ? : 1,
+ DMA_TO_DEVICE, src_chained);
+ dma_map_sg_chained(jrdev, req->dst, dst_nents ? : 1,
+ DMA_FROM_DEVICE, dst_chained);
}

iv_dma = dma_map_single(jrdev, req->info, ivsize, DMA_TO_DEVICE);
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 56ec534337b3..386efb9e192c 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -640,8 +640,7 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ahash_edesc *)((char *)desc -
- offsetof(struct ahash_edesc, hw_desc));
+ edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
caam_jr_strstatus(jrdev, err);

@@ -675,8 +674,7 @@ static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ahash_edesc *)((char *)desc -
- offsetof(struct ahash_edesc, hw_desc));
+ edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
caam_jr_strstatus(jrdev, err);

@@ -710,8 +708,7 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ahash_edesc *)((char *)desc -
- offsetof(struct ahash_edesc, hw_desc));
+ edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
caam_jr_strstatus(jrdev, err);

@@ -745,8 +742,7 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
#endif

- edesc = (struct ahash_edesc *)((char *)desc -
- offsetof(struct ahash_edesc, hw_desc));
+ edesc = container_of(desc, struct ahash_edesc, hw_desc[0]);
if (err)
caam_jr_strstatus(jrdev, err);

diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index 8b9df8deda67..5b288082e6ac 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -340,7 +340,7 @@ static int __init caam_rng_init(void)
pr_err("Job Ring Device allocation for transform failed\n");
return PTR_ERR(dev);
}
- rng_ctx = kmalloc(sizeof(struct caam_rng_ctx), GFP_DMA);
+ rng_ctx = kmalloc(sizeof(*rng_ctx), GFP_KERNEL | GFP_DMA);
if (!rng_ctx)
return -ENOMEM;
err = caam_init_rng(rng_ctx, dev);
diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index 3cade79ea41e..69736b6f07ae 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -15,6 +15,7 @@
#include "jr.h"
#include "desc_constr.h"
#include "error.h"
+#include "ctrl.h"

/*
* Descriptor to instantiate RNG State Handle 0 in normal mode and
@@ -212,7 +213,7 @@ static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
* CAAM eras), then try again.
*/
rdsta_val =
- rd_reg32(&topregs->ctrl.r4tst[0].rdsta) & RDSTA_IFMASK;
+ rd_reg32(&r4tst->rdsta) & RDSTA_IFMASK;
if (status || !(rdsta_val & (1 << sh_idx)))
ret = -EAGAIN;
if (ret)
@@ -368,10 +369,13 @@ static void kick_trng(struct platform_device *pdev, int ent_delay)
int caam_get_era(void)
{
struct device_node *caam_node;
- for_each_compatible_node(caam_node, NULL, "fsl,sec-v4.0") {
+
+ caam_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
+ if (caam_node) {
const uint32_t *prop = (uint32_t *)of_get_property(caam_node,
"fsl,sec-era",
NULL);
+ of_node_put(caam_node);
return prop ? *prop : -ENOTSUPP;
}

diff --git a/drivers/crypto/caam/error.c b/drivers/crypto/caam/error.c
index 6531054a44c8..7d6ed4722345 100644
--- a/drivers/crypto/caam/error.c
+++ b/drivers/crypto/caam/error.c
@@ -146,10 +146,9 @@ static void report_ccb_status(struct device *jrdev, const u32 status,
strlen(rng_err_id_list[err_id])) {
/* RNG-only error */
err_str = rng_err_id_list[err_id];
- } else if (err_id < ARRAY_SIZE(err_id_list))
+ } else {
err_str = err_id_list[err_id];
- else
- snprintf(err_err_code, sizeof(err_err_code), "%02x", err_id);
+ }

dev_err(jrdev, "%08x: %s: %s %d: %s%s: %s%s\n",
status, error, idx_str, idx,
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index 50cd1b9af2ba..ec3652d62e93 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -65,7 +65,7 @@ static int caam_reset_hw_jr(struct device *dev)
/*
* Shutdown JobR independent of platform property code
*/
-int caam_jr_shutdown(struct device *dev)
+static int caam_jr_shutdown(struct device *dev)
{
struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
dma_addr_t inpbusaddr, outbusaddr;
@@ -472,7 +472,7 @@ static int caam_jr_probe(struct platform_device *pdev)
return -ENOMEM;
}

- jrpriv->rregs = (struct caam_job_ring __force *)ctrl;
+ jrpriv->rregs = (struct caam_job_ring __iomem __force *)ctrl;

if (sizeof(dma_addr_t) == sizeof(u64))
if (of_device_is_compatible(nprop, "fsl,sec-v5.0-job-ring"))
diff --git a/drivers/crypto/caam/sg_sw_sec4.h b/drivers/crypto/caam/sg_sw_sec4.h
index b12ff85f4241..a6e5b94756d4 100644
--- a/drivers/crypto/caam/sg_sw_sec4.h
+++ b/drivers/crypto/caam/sg_sw_sec4.h
@@ -17,7 +17,7 @@ static inline void dma_to_sec4_sg_one(struct sec4_sg_entry *sec4_sg_ptr,
sec4_sg_ptr->len = len;
sec4_sg_ptr->reserved = 0;
sec4_sg_ptr->buf_pool_id = 0;
- sec4_sg_ptr->offset = offset;
+ sec4_sg_ptr->offset = (u16)offset;
#ifdef DEBUG
print_hex_dump(KERN_ERR, "sec4_sg_ptr@: ",
DUMP_PREFIX_ADDRESS, 16, 4, sec4_sg_ptr,
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:23 UTC
Commit 4464a7d4f53d756101291da26563f37f7fce40f3
("crypto: caam - remove error propagation handling")
removed error propagation handling only from caamalg.

Do the same in the remaining places: caamhash and caamrng.
Update the descriptors' lengths accordingly.
Note that caamrng's shared descriptor length was incorrect.
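The corrected length can be cross-checked by counting command words. The
three-command breakdown below is an assumption about the descriptor built in
rng_create_sh_desc() after this patch (shared header, OPERATION, store of the
generated bytes), not taken verbatim from the driver:

```c
/* Each CAAM descriptor command word is 4 bytes */
#define CAAM_CMD_SZ sizeof(unsigned int)

/*
 * Assumed breakdown of the caamrng shared descriptor after this patch:
 * shared header + OPERATION (RNG) + store -> 3 command words.
 */
#define DESC_RNG_LEN (3 * CAAM_CMD_SZ)
```

With the error-propagation LOAD removed, the old value of 10 command words
no longer matched anything in the actual descriptor.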

Signed-off-by: Horia Geanta <***@freescale.com>
---
drivers/crypto/caam/caamhash.c | 5 +----
drivers/crypto/caam/caamrng.c | 9 +++------
2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index b464d03ebf40..56ec534337b3 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -72,7 +72,7 @@
#define CAAM_MAX_HASH_DIGEST_SIZE SHA512_DIGEST_SIZE

/* length of descriptors text */
-#define DESC_AHASH_BASE (4 * CAAM_CMD_SZ)
+#define DESC_AHASH_BASE (3 * CAAM_CMD_SZ)
#define DESC_AHASH_UPDATE_LEN (6 * CAAM_CMD_SZ)
#define DESC_AHASH_UPDATE_FIRST_LEN (DESC_AHASH_BASE + 4 * CAAM_CMD_SZ)
#define DESC_AHASH_FINAL_LEN (DESC_AHASH_BASE + 5 * CAAM_CMD_SZ)
@@ -247,9 +247,6 @@ static inline void init_sh_desc_key_ahash(u32 *desc, struct caam_hash_ctx *ctx)

set_jump_tgt_here(desc, key_jump_cmd);
}
-
- /* Propagate errors from shared to job descriptor */
- append_cmd(desc, SET_OK_NO_PROP_ERRORS | CMD_LOAD);
}

/*
diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index ae31e555793c..8b9df8deda67 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -52,7 +52,7 @@

/* length of descriptors */
#define DESC_JOB_O_LEN (CAAM_CMD_SZ * 2 + CAAM_PTR_SZ * 2)
-#define DESC_RNG_LEN (10 * CAAM_CMD_SZ)
+#define DESC_RNG_LEN (3 * CAAM_CMD_SZ)

/* Buffer, its dma address and lock */
struct buf_data {
@@ -90,8 +90,8 @@ static inline void rng_unmap_ctx(struct caam_rng_ctx *ctx)
struct device *jrdev = ctx->jrdev;

if (ctx->sh_desc_dma)
- dma_unmap_single(jrdev, ctx->sh_desc_dma, DESC_RNG_LEN,
- DMA_TO_DEVICE);
+ dma_unmap_single(jrdev, ctx->sh_desc_dma,
+ desc_bytes(ctx->sh_desc), DMA_TO_DEVICE);
rng_unmap_buf(jrdev, &ctx->bufs[0]);
rng_unmap_buf(jrdev, &ctx->bufs[1]);
}
@@ -192,9 +192,6 @@ static inline int rng_create_sh_desc(struct caam_rng_ctx *ctx)

init_sh_desc(desc, HDR_SHARE_SERIAL);

- /* Propagate errors from shared to job descriptor */
- append_cmd(desc, SET_OK_NO_PROP_ERRORS | CMD_LOAD);
-
/* Generate random bytes */
append_operation(desc, OP_ALG_ALGSEL_RNG | OP_TYPE_CLASS1_ALG);
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:27 UTC
The Run Time Library (RTA) is targeted at building CAAM descriptors,
i.e. programs using an accelerator-specific instruction set.

The main reason for replacing the incumbent "inline append" is
to have a single code base for both user space and kernel space.
The library also provides greater flexibility and is more up to date.

A useful feature is that the library warns when options that are not
available in a given CAAM version ("Era") are used.

The RTA addition is split into 3 parts, to overcome patch size limitations:
-part 1 (this patch) - adds all commands / opcodes (except for PROTOCOL)
-part 2 - adds headers defining the API
-part 3 - replaces desc.h with a newer version (from within the library)

Signed-off-by: Horia Geanta <***@freescale.com>
Signed-off-by: Carmen Iorga <***@freescale.com>
---
drivers/crypto/caam/flib/rta/fifo_load_store_cmd.h | 303 ++++++++++++
drivers/crypto/caam/flib/rta/header_cmd.h | 209 ++++++++
drivers/crypto/caam/flib/rta/jump_cmd.h | 168 +++++++
drivers/crypto/caam/flib/rta/key_cmd.h | 183 +++++++
drivers/crypto/caam/flib/rta/load_cmd.h | 297 +++++++++++
drivers/crypto/caam/flib/rta/math_cmd.h | 362 ++++++++++++++
drivers/crypto/caam/flib/rta/move_cmd.h | 401 +++++++++++++++
drivers/crypto/caam/flib/rta/nfifo_cmd.h | 157 ++++++
drivers/crypto/caam/flib/rta/operation_cmd.h | 545 +++++++++++++++++++++
drivers/crypto/caam/flib/rta/seq_in_out_ptr_cmd.h | 168 +++++++
drivers/crypto/caam/flib/rta/signature_cmd.h | 36 ++
drivers/crypto/caam/flib/rta/store_cmd.h | 145 ++++++
12 files changed, 2974 insertions(+)
create mode 100644 drivers/crypto/caam/flib/rta/fifo_load_store_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/header_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/jump_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/key_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/load_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/math_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/move_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/nfifo_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/operation_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/seq_in_out_ptr_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/signature_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/store_cmd.h

diff --git a/drivers/crypto/caam/flib/rta/fifo_load_store_cmd.h b/drivers/crypto/caam/flib/rta/fifo_load_store_cmd.h
new file mode 100644
index 000000000000..d1b9016125e9
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/fifo_load_store_cmd.h
@@ -0,0 +1,303 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_FIFO_LOAD_STORE_CMD_H__
+#define __RTA_FIFO_LOAD_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t fifo_load_table[][2] = {
+/*1*/ { PKA0, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A0 },
+ { PKA1, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A1 },
+ { PKA2, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A2 },
+ { PKA3, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A3 },
+ { PKB0, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B0 },
+ { PKB1, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B1 },
+ { PKB2, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B2 },
+ { PKB3, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B3 },
+ { PKA, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_A },
+ { PKB, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_B },
+ { PKN, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_PK_N },
+ { SKIP, FIFOLD_CLASS_SKIP },
+ { MSG1, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_MSG },
+ { MSG2, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG },
+ { MSGOUTSNOOP, FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG1OUT2 },
+ { MSGINSNOOP, FIFOLD_CLASS_BOTH | FIFOLD_TYPE_MSG },
+ { IV1, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_IV },
+ { IV2, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_IV },
+ { AAD1, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_AAD },
+ { ICV1, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_ICV },
+ { ICV2, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_ICV },
+ { BIT_DATA, FIFOLD_TYPE_BITDATA },
+/*23*/ { IFIFO, FIFOLD_CLASS_CLASS1 | FIFOLD_TYPE_NOINFOFIFO }
+};
+
+/*
+ * Allowed FIFO_LOAD input data types for each SEC Era.
+ * Values represent the number of entries from fifo_load_table[] that are
+ * supported.
+ */
+static const unsigned fifo_load_table_sz[] = {22, 22, 23, 23, 23, 23, 23, 23};
+
+static inline int rta_fifo_load(struct program *program, uint32_t src,
+ uint64_t loc, uint32_t length, uint32_t flags)
+{
+ uint32_t opcode = 0;
+ uint32_t ext_length = 0, val = 0;
+ int ret = -EINVAL;
+ bool is_seq_cmd = false;
+ unsigned start_pc = program->current_pc;
+
+ /* write command type field */
+ if (flags & SEQ) {
+ opcode = CMD_SEQ_FIFO_LOAD;
+ is_seq_cmd = true;
+ } else {
+ opcode = CMD_FIFO_LOAD;
+ }
+
+ /* Parameters checking */
+ if (is_seq_cmd) {
+ if ((flags & IMMED) || (flags & SGF)) {
+ pr_err("SEQ FIFO LOAD: Invalid command\n");
+ goto err;
+ }
+ if ((rta_sec_era <= RTA_SEC_ERA_5) && (flags & AIDF)) {
+ pr_err("SEQ FIFO LOAD: Flag(s) not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+ if ((flags & VLF) && ((flags & EXT) || (length >> 16))) {
+ pr_err("SEQ FIFO LOAD: Invalid usage of VLF\n");
+ goto err;
+ }
+ } else {
+ if (src == SKIP) {
+ pr_err("FIFO LOAD: Invalid src\n");
+ goto err;
+ }
+ if ((flags & AIDF) || (flags & VLF)) {
+ pr_err("FIFO LOAD: Invalid command\n");
+ goto err;
+ }
+ if ((flags & IMMED) && (flags & SGF)) {
+ pr_err("FIFO LOAD: Invalid usage of SGF and IMM\n");
+ goto err;
+ }
+ if ((flags & IMMED) && ((flags & EXT) || (length >> 16))) {
+ pr_err("FIFO LOAD: Invalid usage of EXT and IMM\n");
+ goto err;
+ }
+ }
+
+ /* write input data type field */
+ ret = __rta_map_opcode(src, fifo_load_table,
+ fifo_load_table_sz[rta_sec_era], &val);
+ if (ret < 0) {
+ pr_err("FIFO LOAD: Source value is not supported. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+ opcode |= val;
+
+ if (flags & CLASS1)
+ opcode |= FIFOLD_CLASS_CLASS1;
+ if (flags & CLASS2)
+ opcode |= FIFOLD_CLASS_CLASS2;
+ if (flags & BOTH)
+ opcode |= FIFOLD_CLASS_BOTH;
+
+ /* write fields: SGF|VLF, IMM, [LC1, LC2, F1] */
+ if (flags & FLUSH1)
+ opcode |= FIFOLD_TYPE_FLUSH1;
+ if (flags & LAST1)
+ opcode |= FIFOLD_TYPE_LAST1;
+ if (flags & LAST2)
+ opcode |= FIFOLD_TYPE_LAST2;
+ if (!is_seq_cmd) {
+ if (flags & SGF)
+ opcode |= FIFOLDST_SGF;
+ if (flags & IMMED)
+ opcode |= FIFOLD_IMM;
+ } else {
+ if (flags & VLF)
+ opcode |= FIFOLDST_VLF;
+ if (flags & AIDF)
+ opcode |= FIFOLD_AIDF;
+ }
+
+ /*
+ * Verify if extended length is required. In case of BITDATA, calculate
+ * number of full bytes and additional valid bits.
+ */
+ if ((flags & EXT) || (length >> 16)) {
+ opcode |= FIFOLDST_EXT;
+ if (src == BIT_DATA) {
+ ext_length = (length / 8);
+ length = (length % 8);
+ } else {
+ ext_length = length;
+ length = 0;
+ }
+ }
+ opcode |= (uint16_t) length;
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ /* write pointer or immediate data field */
+ if (flags & IMMED)
+ __rta_inline_data(program, loc, flags & __COPY_MASK, length);
+ else if (!is_seq_cmd)
+ __rta_out64(program, program->ps, loc);
+
+ /* write extended length field */
+ if (opcode & FIFOLDST_EXT)
+ __rta_out32(program, ext_length);
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+static const uint32_t fifo_store_table[][2] = {
+/*1*/ { PKA0, FIFOST_TYPE_PKHA_A0 },
+ { PKA1, FIFOST_TYPE_PKHA_A1 },
+ { PKA2, FIFOST_TYPE_PKHA_A2 },
+ { PKA3, FIFOST_TYPE_PKHA_A3 },
+ { PKB0, FIFOST_TYPE_PKHA_B0 },
+ { PKB1, FIFOST_TYPE_PKHA_B1 },
+ { PKB2, FIFOST_TYPE_PKHA_B2 },
+ { PKB3, FIFOST_TYPE_PKHA_B3 },
+ { PKA, FIFOST_TYPE_PKHA_A },
+ { PKB, FIFOST_TYPE_PKHA_B },
+ { PKN, FIFOST_TYPE_PKHA_N },
+ { PKE, FIFOST_TYPE_PKHA_E_JKEK },
+ { RNG, FIFOST_TYPE_RNGSTORE },
+ { RNGOFIFO, FIFOST_TYPE_RNGFIFO },
+ { AFHA_SBOX, FIFOST_TYPE_AF_SBOX_JKEK },
+ { MDHA_SPLIT_KEY, FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_SPLIT_KEK },
+ { MSG, FIFOST_TYPE_MESSAGE_DATA },
+ { KEY1, FIFOST_CLASS_CLASS1KEY | FIFOST_TYPE_KEY_KEK },
+ { KEY2, FIFOST_CLASS_CLASS2KEY | FIFOST_TYPE_KEY_KEK },
+ { OFIFO, FIFOST_TYPE_OUTFIFO_KEK},
+ { SKIP, FIFOST_TYPE_SKIP },
+/*22*/ { METADATA, FIFOST_TYPE_METADATA}
+};
+
+/*
+ * Allowed FIFO_STORE output data types for each SEC Era.
+ * Values represent the number of entries from fifo_store_table[] that are
+ * supported.
+ */
+static const unsigned fifo_store_table_sz[] = {21, 21, 21, 21, 22, 22, 22, 22};
+
+static inline int rta_fifo_store(struct program *program, uint32_t src,
+ uint32_t encrypt_flags, uint64_t dst,
+ uint32_t length, uint32_t flags)
+{
+ uint32_t opcode = 0;
+ uint32_t val = 0;
+ int ret = -EINVAL;
+ bool is_seq_cmd = false;
+ unsigned start_pc = program->current_pc;
+
+ /* write command type field */
+ if (flags & SEQ) {
+ opcode = CMD_SEQ_FIFO_STORE;
+ is_seq_cmd = true;
+ } else {
+ opcode = CMD_FIFO_STORE;
+ }
+
+ /* Parameter checking */
+ if (is_seq_cmd) {
+ if ((flags & VLF) && ((length >> 16) || (flags & EXT))) {
+ pr_err("SEQ FIFO STORE: Invalid usage of VLF\n");
+ goto err;
+ }
+ if (dst) {
+ pr_err("SEQ FIFO STORE: Invalid command\n");
+ goto err;
+ }
+ if ((src == METADATA) && (flags & (CONT | EXT))) {
+ pr_err("SEQ FIFO STORE: Invalid flags\n");
+ goto err;
+ }
+ } else {
+ if (((src == RNGOFIFO) && ((dst) || (flags & EXT))) ||
+ (src == METADATA)) {
+ pr_err("FIFO STORE: Invalid destination\n");
+ goto err;
+ }
+ }
+ if ((rta_sec_era == RTA_SEC_ERA_7) && (src == AFHA_SBOX)) {
+ pr_err("FIFO STORE: AFHA S-box not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+
+ /* write output data type field */
+ ret = __rta_map_opcode(src, fifo_store_table,
+ fifo_store_table_sz[rta_sec_era], &val);
+ if (ret < 0) {
+ pr_err("FIFO STORE: Source type not supported. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+ opcode |= val;
+
+ if (encrypt_flags & TK)
+ opcode |= (0x1 << FIFOST_TYPE_SHIFT);
+ if (encrypt_flags & EKT) {
+ if (rta_sec_era == RTA_SEC_ERA_1) {
+ pr_err("FIFO STORE: AES-CCM source types not supported\n");
+ ret = -EINVAL;
+ goto err;
+ }
+ opcode |= (0x10 << FIFOST_TYPE_SHIFT);
+ opcode &= (uint32_t)~(0x20 << FIFOST_TYPE_SHIFT);
+ }
+
+ /* write flags fields */
+ if (flags & CONT)
+ opcode |= FIFOST_CONT;
+ if ((flags & VLF) && (is_seq_cmd))
+ opcode |= FIFOLDST_VLF;
+ if ((flags & SGF) && (!is_seq_cmd))
+ opcode |= FIFOLDST_SGF;
+ if (flags & CLASS1)
+ opcode |= FIFOST_CLASS_CLASS1KEY;
+ if (flags & CLASS2)
+ opcode |= FIFOST_CLASS_CLASS2KEY;
+ if (flags & BOTH)
+ opcode |= FIFOST_CLASS_BOTH;
+
+ /* Verify if extended length is required */
+ if ((length >> 16) || (flags & EXT))
+ opcode |= FIFOLDST_EXT;
+ else
+ opcode |= (uint16_t) length;
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ /* write pointer field */
+ if ((!is_seq_cmd) && (dst))
+ __rta_out64(program, program->ps, dst);
+
+ /* write extended length field */
+ if (opcode & FIFOLDST_EXT)
+ __rta_out32(program, length);
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+#endif /* __RTA_FIFO_LOAD_STORE_CMD_H__ */
diff --git a/drivers/crypto/caam/flib/rta/header_cmd.h b/drivers/crypto/caam/flib/rta/header_cmd.h
new file mode 100644
index 000000000000..256465fed844
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/header_cmd.h
@@ -0,0 +1,209 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_HEADER_CMD_H__
+#define __RTA_HEADER_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed job header flags for each SEC Era. */
+static const uint32_t job_header_flags[] = {
+ DNR | TD | MTD | SHR | REO,
+ DNR | TD | MTD | SHR | REO | RSMS,
+ DNR | TD | MTD | SHR | REO | RSMS,
+ DNR | TD | MTD | SHR | REO | RSMS,
+ DNR | TD | MTD | SHR | REO | RSMS | EXT,
+ DNR | TD | MTD | SHR | REO | RSMS | EXT,
+ DNR | TD | MTD | SHR | REO | RSMS | EXT,
+ DNR | TD | MTD | SHR | REO | EXT
+};
+
+/* Allowed shared header flags for each SEC Era. */
+static const uint32_t shr_header_flags[] = {
+ DNR | SC | PD,
+ DNR | SC | PD | CIF,
+ DNR | SC | PD | CIF,
+ DNR | SC | PD | CIF | RIF,
+ DNR | SC | PD | CIF | RIF,
+ DNR | SC | PD | CIF | RIF,
+ DNR | SC | PD | CIF | RIF,
+ DNR | SC | PD | CIF | RIF
+};
+
+static inline int rta_shr_header(struct program *program,
+ enum rta_share_type share, unsigned start_idx,
+ uint32_t flags)
+{
+ uint32_t opcode = CMD_SHARED_DESC_HDR;
+ unsigned start_pc = program->current_pc;
+
+ if (flags & ~shr_header_flags[rta_sec_era]) {
+ pr_err("SHR_DESC: Flag(s) not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+
+ switch (share) {
+ case SHR_ALWAYS:
+ opcode |= HDR_SHARE_ALWAYS;
+ break;
+ case SHR_SERIAL:
+ opcode |= HDR_SHARE_SERIAL;
+ break;
+ case SHR_NEVER:
+ /*
+ * opcode |= HDR_SHARE_NEVER;
+ * HDR_SHARE_NEVER is 0
+ */
+ break;
+ case SHR_WAIT:
+ opcode |= HDR_SHARE_WAIT;
+ break;
+ default:
+ pr_err("SHR_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+
+ opcode |= HDR_ONE;
+ opcode |= (start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+
+ if (flags & DNR)
+ opcode |= HDR_DNR;
+ if (flags & CIF)
+ opcode |= HDR_CLEAR_IFIFO;
+ if (flags & SC)
+ opcode |= HDR_SAVECTX;
+ if (flags & PD)
+ opcode |= HDR_PROP_DNR;
+ if (flags & RIF)
+ opcode |= HDR_RIF;
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ if (program->current_instruction == 1)
+ program->shrhdr = program->buffer;
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return -EINVAL;
+}
+
+static inline int rta_job_header(struct program *program,
+ enum rta_share_type share, unsigned start_idx,
+ uint64_t shr_desc, uint32_t flags,
+ uint32_t ext_flags)
+{
+ uint32_t opcode = CMD_DESC_HDR;
+ uint32_t hdr_ext = 0;
+ unsigned start_pc = program->current_pc;
+
+ if (flags & ~job_header_flags[rta_sec_era]) {
+ pr_err("JOB_DESC: Flag(s) not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+
+ switch (share) {
+ case SHR_ALWAYS:
+ opcode |= HDR_SHARE_ALWAYS;
+ break;
+ case SHR_SERIAL:
+ opcode |= HDR_SHARE_SERIAL;
+ break;
+ case SHR_NEVER:
+ /*
+ * opcode |= HDR_SHARE_NEVER;
+ * HDR_SHARE_NEVER is 0
+ */
+ break;
+ case SHR_WAIT:
+ opcode |= HDR_SHARE_WAIT;
+ break;
+ case SHR_DEFER:
+ opcode |= HDR_SHARE_DEFER;
+ break;
+ default:
+ pr_err("JOB_DESC: SHARE VALUE is not supported. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+
+ if ((flags & TD) && (flags & REO)) {
+ pr_err("JOB_DESC: REO flag not supported for trusted descriptors. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+
+ if ((rta_sec_era < RTA_SEC_ERA_7) && (flags & MTD) && !(flags & TD)) {
+ pr_err("JOB_DESC: Trying to MTD a descriptor that is not a TD. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+
+ if ((flags & EXT) && !(flags & SHR) && (start_idx < 2)) {
+ pr_err("JOB_DESC: Start index must be >= 2 in case of no SHR and EXT. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+
+ opcode |= HDR_ONE;
+ opcode |= ((start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK);
+
+ if (flags & EXT) {
+ opcode |= HDR_EXT;
+
+ if (ext_flags & DSV) {
+ hdr_ext |= HDR_EXT_DSEL_VALID;
+ hdr_ext |= ext_flags & DSEL_MASK;
+ }
+
+ if (ext_flags & FTD) {
+ if (rta_sec_era <= RTA_SEC_ERA_5) {
+ pr_err("JOB_DESC: Fake trusted descriptor not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+
+ hdr_ext |= HDR_EXT_FTD;
+ }
+ }
+ if (flags & RSMS)
+ opcode |= HDR_RSLS;
+ if (flags & DNR)
+ opcode |= HDR_DNR;
+ if (flags & TD)
+ opcode |= HDR_TRUSTED;
+ if (flags & MTD)
+ opcode |= HDR_MAKE_TRUSTED;
+ if (flags & REO)
+ opcode |= HDR_REVERSE;
+ if (flags & SHR)
+ opcode |= HDR_SHARED;
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ if (program->current_instruction == 1) {
+ program->jobhdr = program->buffer;
+
+ if (opcode & HDR_SHARED)
+ __rta_out64(program, program->ps, shr_desc);
+ }
+
+ if (flags & EXT)
+ __rta_out32(program, hdr_ext);
+
+ /* Note: descriptor length is set in program_finalize routine */
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return -EINVAL;
+}
+
+#endif /* __RTA_HEADER_CMD_H__ */
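[Editorial note, not part of the patch] Both header commands pack the start index into the opcode with a shift-and-mask, `(start_idx << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK`, so out-of-range indices are silently truncated to the field width. A minimal sketch, with hypothetical shift/mask/HDR_ONE values rather than the real desc.h constants:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical field layout -- not the real desc.h values */
#define HDR_ONE			(1u << 23)
#define HDR_START_IDX_SHIFT	16
#define HDR_START_IDX_MASK	(0x3fu << HDR_START_IDX_SHIFT)

/* Pack the mandatory ONE bit and the start index into a header word */
static uint32_t pack_hdr(uint32_t base, unsigned int start_idx)
{
	uint32_t opcode = base | HDR_ONE;

	opcode |= ((uint32_t)start_idx << HDR_START_IDX_SHIFT) &
		  HDR_START_IDX_MASK;
	return opcode;
}
```

An index wider than the field (0x45 with a 6-bit mask) keeps only its low bits, so only 5 survives in the encoded word.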
diff --git a/drivers/crypto/caam/flib/rta/jump_cmd.h b/drivers/crypto/caam/flib/rta/jump_cmd.h
new file mode 100644
index 000000000000..c22b83e890ea
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/jump_cmd.h
@@ -0,0 +1,168 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_JUMP_CMD_H__
+#define __RTA_JUMP_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t jump_test_cond[][2] = {
+ { NIFP, JUMP_COND_NIFP },
+ { NIP, JUMP_COND_NIP },
+ { NOP, JUMP_COND_NOP },
+ { NCP, JUMP_COND_NCP },
+ { CALM, JUMP_COND_CALM },
+ { SELF, JUMP_COND_SELF },
+ { SHRD, JUMP_COND_SHRD },
+ { JQP, JUMP_COND_JQP },
+ { MATH_Z, JUMP_COND_MATH_Z },
+ { MATH_N, JUMP_COND_MATH_N },
+ { MATH_NV, JUMP_COND_MATH_NV },
+ { MATH_C, JUMP_COND_MATH_C },
+ { PK_0, JUMP_COND_PK_0 },
+ { PK_GCD_1, JUMP_COND_PK_GCD_1 },
+ { PK_PRIME, JUMP_COND_PK_PRIME },
+ { CLASS1, JUMP_CLASS_CLASS1 },
+ { CLASS2, JUMP_CLASS_CLASS2 },
+ { BOTH, JUMP_CLASS_BOTH }
+};
+
+static const uint32_t jump_test_math_cond[][2] = {
+ { MATH_Z, JUMP_COND_MATH_Z },
+ { MATH_N, JUMP_COND_MATH_N },
+ { MATH_NV, JUMP_COND_MATH_NV },
+ { MATH_C, JUMP_COND_MATH_C }
+};
+
+static const uint32_t jump_src_dst[][2] = {
+ { MATH0, JUMP_SRC_DST_MATH0 },
+ { MATH1, JUMP_SRC_DST_MATH1 },
+ { MATH2, JUMP_SRC_DST_MATH2 },
+ { MATH3, JUMP_SRC_DST_MATH3 },
+ { DPOVRD, JUMP_SRC_DST_DPOVRD },
+ { SEQINSZ, JUMP_SRC_DST_SEQINLEN },
+ { SEQOUTSZ, JUMP_SRC_DST_SEQOUTLEN },
+ { VSEQINSZ, JUMP_SRC_DST_VARSEQINLEN },
+ { VSEQOUTSZ, JUMP_SRC_DST_VARSEQOUTLEN }
+};
+
+static inline int rta_jump(struct program *program, uint64_t address,
+ enum rta_jump_type jump_type,
+ enum rta_jump_cond test_type,
+ uint32_t test_condition, uint32_t src_dst)
+{
+ uint32_t opcode = CMD_JUMP;
+ unsigned start_pc = program->current_pc;
+ int ret = -EINVAL;
+
+ if (((jump_type == GOSUB) || (jump_type == RETURN)) &&
+ (rta_sec_era < RTA_SEC_ERA_4)) {
+ pr_err("JUMP: Jump type not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+
+ if (((jump_type == LOCAL_JUMP_INC) || (jump_type == LOCAL_JUMP_DEC)) &&
+ (rta_sec_era <= RTA_SEC_ERA_5)) {
+ pr_err("JUMP_INCDEC: Jump type not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+
+ switch (jump_type) {
+ case (LOCAL_JUMP):
+ /*
+ * opcode |= JUMP_TYPE_LOCAL;
+ * JUMP_TYPE_LOCAL is 0
+ */
+ break;
+ case (HALT):
+ opcode |= JUMP_TYPE_HALT;
+ break;
+ case (HALT_STATUS):
+ opcode |= JUMP_TYPE_HALT_USER;
+ break;
+ case (FAR_JUMP):
+ opcode |= JUMP_TYPE_NONLOCAL;
+ break;
+ case (GOSUB):
+ opcode |= JUMP_TYPE_GOSUB;
+ break;
+ case (RETURN):
+ opcode |= JUMP_TYPE_RETURN;
+ break;
+ case (LOCAL_JUMP_INC):
+ opcode |= JUMP_TYPE_LOCAL_INC;
+ break;
+ case (LOCAL_JUMP_DEC):
+ opcode |= JUMP_TYPE_LOCAL_DEC;
+ break;
+ default:
+ pr_err("JUMP: Invalid jump type. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+
+ switch (test_type) {
+ case (ALL_TRUE):
+ /*
+ * opcode |= JUMP_TEST_ALL;
+ * JUMP_TEST_ALL is 0
+ */
+ break;
+ case (ALL_FALSE):
+ opcode |= JUMP_TEST_INVALL;
+ break;
+ case (ANY_TRUE):
+ opcode |= JUMP_TEST_ANY;
+ break;
+ case (ANY_FALSE):
+ opcode |= JUMP_TEST_INVANY;
+ break;
+ default:
+ pr_err("JUMP: test type not supported. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+
+ /* write test condition field */
+ if ((jump_type != LOCAL_JUMP_INC) && (jump_type != LOCAL_JUMP_DEC)) {
+ __rta_map_flags(test_condition, jump_test_cond,
+ ARRAY_SIZE(jump_test_cond), &opcode);
+ } else {
+ uint32_t val = 0;
+
+ ret = __rta_map_opcode(src_dst, jump_src_dst,
+ ARRAY_SIZE(jump_src_dst), &val);
+ if (ret < 0) {
+ pr_err("JUMP_INCDEC: SRC_DST not supported. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ opcode |= val;
+
+ __rta_map_flags(test_condition, jump_test_math_cond,
+ ARRAY_SIZE(jump_test_math_cond), &opcode);
+ }
+
+ /* write local offset field for local jumps and user-defined halt */
+ if ((jump_type == LOCAL_JUMP) || (jump_type == LOCAL_JUMP_INC) ||
+ (jump_type == LOCAL_JUMP_DEC) || (jump_type == GOSUB) ||
+ (jump_type == HALT_STATUS))
+ opcode |= (uint32_t)(address & JUMP_OFFSET_MASK);
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ if (jump_type == FAR_JUMP)
+ __rta_out64(program, program->ps, address);
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+#endif /* __RTA_JUMP_CMD_H__ */
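[Editorial note, not part of the patch] The `__rta_map_opcode()` helper used above (e.g. on jump_src_dst[]) is a linear search over {user value, opcode bits} pairs, with the table length capped per Era by the *_sz[] arrays. A self-contained sketch with hypothetical table entries (the real helper lives in the RTA support code and returns -EINVAL on a miss):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical {user id, opcode bits} pairs */
static const uint32_t src_dst_map[][2] = {
	{ 0x10, 0x00400000 },
	{ 0x11, 0x00410000 },
};

/* Find the opcode bits for a user value; -1 stands in for -EINVAL */
static int map_opcode(uint32_t in, const uint32_t (*table)[2],
		      unsigned int size, uint32_t *out)
{
	unsigned int i;

	for (i = 0; i < size; i++)
		if (table[i][0] == in) {
			*out = table[i][1];
			return 0;
		}
	return -1;
}

/* Convenience wrapper: mapped opcode bits, or 0 if unsupported */
static uint32_t map_or_zero(uint32_t in)
{
	uint32_t v = 0;

	if (map_opcode(in, src_dst_map, 2, &v) < 0)
		return 0;
	return v;
}
```

Passing a smaller `size` models an older Era: entries past the cap simply become "not supported", which is how one table serves every SEC revision.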
diff --git a/drivers/crypto/caam/flib/rta/key_cmd.h b/drivers/crypto/caam/flib/rta/key_cmd.h
new file mode 100644
index 000000000000..0763d78d282c
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/key_cmd.h
@@ -0,0 +1,183 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_KEY_CMD_H__
+#define __RTA_KEY_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed encryption flags for each SEC Era */
+static const uint32_t key_enc_flags[] = {
+ ENC,
+ ENC | NWB | EKT | TK,
+ ENC | NWB | EKT | TK,
+ ENC | NWB | EKT | TK,
+ ENC | NWB | EKT | TK,
+ ENC | NWB | EKT | TK,
+ ENC | NWB | EKT | TK | PTS,
+ ENC | NWB | EKT | TK | PTS
+};
+
+static inline int rta_key(struct program *program, uint32_t key_dst,
+ uint32_t encrypt_flags, uint64_t src, uint32_t length,
+ uint32_t flags)
+{
+ uint32_t opcode = 0;
+ bool is_seq_cmd = false;
+ unsigned start_pc = program->current_pc;
+
+ if (encrypt_flags & ~key_enc_flags[rta_sec_era]) {
+ pr_err("KEY: Flag(s) not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+
+ /* write cmd type */
+ if (flags & SEQ) {
+ opcode = CMD_SEQ_KEY;
+ is_seq_cmd = true;
+ } else {
+ opcode = CMD_KEY;
+ }
+
+ /* check parameters */
+ if (is_seq_cmd) {
+ if ((flags & IMMED) || (flags & SGF)) {
+ pr_err("SEQKEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ if ((rta_sec_era <= RTA_SEC_ERA_5) &&
+ ((flags & VLF) || (flags & AIDF))) {
+ pr_err("SEQKEY: Flag(s) not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+ } else {
+ if ((flags & AIDF) || (flags & VLF)) {
+ pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ if ((flags & SGF) && (flags & IMMED)) {
+ pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ }
+
+ if ((encrypt_flags & PTS) &&
+ ((encrypt_flags & ENC) || (encrypt_flags & NWB) ||
+ (key_dst == PKE))) {
+ pr_err("KEY: Invalid flag / destination. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+
+ if (key_dst == AFHA_SBOX) {
+ if (rta_sec_era == RTA_SEC_ERA_7) {
+ pr_err("KEY: AFHA S-box not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+
+ if (flags & IMMED) {
+ pr_err("KEY: Invalid flag. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+
+ /*
+ * Sbox data loaded into the ARC-4 processor must be exactly
+ * 258 bytes long, or else a data sequence error is generated.
+ */
+ if (length != 258) {
+ pr_err("KEY: Invalid length. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ }
+
+ /* write key destination and class fields */
+ switch (key_dst) {
+ case (KEY1):
+ opcode |= KEY_DEST_CLASS1;
+ break;
+ case (KEY2):
+ opcode |= KEY_DEST_CLASS2;
+ break;
+ case (PKE):
+ opcode |= KEY_DEST_CLASS1 | KEY_DEST_PKHA_E;
+ break;
+ case (AFHA_SBOX):
+ opcode |= KEY_DEST_CLASS1 | KEY_DEST_AFHA_SBOX;
+ break;
+ case (MDHA_SPLIT_KEY):
+ opcode |= KEY_DEST_CLASS2 | KEY_DEST_MDHA_SPLIT;
+ break;
+ default:
+ pr_err("KEY: Invalid destination. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+
+ /* write key length */
+ length &= KEY_LENGTH_MASK;
+ opcode |= length;
+
+ /* write key command specific flags */
+ if (encrypt_flags & ENC) {
+		/* Encrypted (black) keys must be padded to 8 bytes (CCM) or
+		 * 16 bytes (ECB), depending on the EKT bit. AES-CCM keys
+		 * (EKT = 1) have a 6-byte nonce and 6-byte MAC after padding.
+		 */
+ opcode |= KEY_ENC;
+ if (encrypt_flags & EKT) {
+ opcode |= KEY_EKT;
+ length = ALIGN(length, 8);
+ length += 12;
+ } else {
+ length = ALIGN(length, 16);
+ }
+ if (encrypt_flags & TK)
+ opcode |= KEY_TK;
+ }
+ if (encrypt_flags & NWB)
+ opcode |= KEY_NWB;
+ if (encrypt_flags & PTS)
+ opcode |= KEY_PTS;
+
+ /* write general command flags */
+ if (!is_seq_cmd) {
+ if (flags & IMMED)
+ opcode |= KEY_IMM;
+ if (flags & SGF)
+ opcode |= KEY_SGF;
+ } else {
+ if (flags & AIDF)
+ opcode |= KEY_AIDF;
+ if (flags & VLF)
+ opcode |= KEY_VLF;
+ }
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ if (flags & IMMED)
+ __rta_inline_data(program, src, flags & __COPY_MASK, length);
+ else
+ __rta_out64(program, program->ps, src);
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return -EINVAL;
+}
+
+#endif /* __RTA_KEY_CMD_H__ */
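[Editorial note, not part of the patch] The black-key length adjustment in rta_key() can be isolated: with EKT set (AES-CCM) the key is padded to 8 bytes and grows by 12 bytes of nonce + MAC; without EKT (AES-ECB) it is padded to 16 bytes. A sketch using a local ALIGN_UP macro in place of the kernel's ALIGN():

```c
#include <assert.h>
#include <stdint.h>

/* Round x up to a multiple of a (a must be a power of two) */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((uint32_t)(a) - 1))

/* Bytes actually consumed for an encrypted (black) key of len bytes */
static uint32_t black_key_len(uint32_t len, int ekt)
{
	if (ekt)				/* AES-CCM */
		return ALIGN_UP(len, 8) + 12;	/* 6B nonce + 6B MAC */
	return ALIGN_UP(len, 16);		/* AES-ECB */
}
```

So a 16-byte CCM-wrapped key occupies 28 bytes, a 17-byte one 36 bytes, and the same 17-byte key ECB-wrapped occupies 32.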
diff --git a/drivers/crypto/caam/flib/rta/load_cmd.h b/drivers/crypto/caam/flib/rta/load_cmd.h
new file mode 100644
index 000000000000..c86c527f1c37
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/load_cmd.h
@@ -0,0 +1,297 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_LOAD_CMD_H__
+#define __RTA_LOAD_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed length and offset masks for each SEC Era in case DST = DCTRL */
+static const uint32_t load_len_mask_allowed[] = {
+ 0x000000ee,
+ 0x000000fe,
+ 0x000000fe,
+ 0x000000fe,
+ 0x000000fe,
+ 0x000000fe,
+ 0x000000fe,
+ 0x000000fe
+};
+
+static const uint32_t load_off_mask_allowed[] = {
+ 0x0000000f,
+ 0x000000ff,
+ 0x000000ff,
+ 0x000000ff,
+ 0x000000ff,
+ 0x000000ff,
+ 0x000000ff,
+ 0x000000ff
+};
+
+#define IMM_MUST 0
+#define IMM_CAN 1
+#define IMM_NO 2
+#define IMM_DSNM 3 /* src type does not matter */
+
+enum e_lenoff {
+ LENOF_03,
+ LENOF_4,
+ LENOF_48,
+ LENOF_448,
+ LENOF_18,
+ LENOF_32,
+ LENOF_24,
+ LENOF_16,
+ LENOF_8,
+ LENOF_128,
+ LENOF_256,
+	DSNM /* length/offset values do not matter */
+};
+
+struct load_map {
+ uint32_t dst;
+ uint32_t dst_opcode;
+ enum e_lenoff len_off;
+ uint8_t imm_src;
+};
+
+static const struct load_map load_dst[] = {
+/*1*/ { KEY1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+ LENOF_4, IMM_MUST },
+ { KEY2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG,
+ LENOF_4, IMM_MUST },
+ { DATA1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+ LENOF_448, IMM_MUST },
+ { DATA2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG,
+ LENOF_448, IMM_MUST },
+ { ICV1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+ LENOF_4, IMM_MUST },
+ { ICV2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG,
+ LENOF_4, IMM_MUST },
+ { CCTRL, LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CHACTRL,
+ LENOF_4, IMM_MUST },
+ { DCTRL, LDST_CLASS_DECO | LDST_IMM | LDST_SRCDST_WORD_DECOCTRL,
+ DSNM, IMM_DSNM },
+ { ICTRL, LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_IRQCTRL,
+ LENOF_4, IMM_MUST },
+ { DPOVRD, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_PCLOVRD,
+ LENOF_4, IMM_MUST },
+ { CLRW, LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_CLRW,
+ LENOF_4, IMM_MUST },
+ { AAD1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ,
+ LENOF_4, IMM_MUST },
+ { IV1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ,
+ LENOF_4, IMM_MUST },
+ { ALTDS1, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ALTDS_CLASS1,
+ LENOF_448, IMM_MUST },
+ { PKASZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ,
+	    LENOF_4, IMM_MUST },
+ { PKBSZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ,
+ LENOF_4, IMM_MUST },
+ { PKNSZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ,
+ LENOF_4, IMM_MUST },
+ { PKESZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ,
+ LENOF_4, IMM_MUST },
+ { NFIFO, LDST_CLASS_IND_CCB | LDST_SRCDST_WORD_INFO_FIFO,
+ LENOF_48, IMM_MUST },
+ { IFIFO, LDST_SRCDST_BYTE_INFIFO, LENOF_18, IMM_MUST },
+ { OFIFO, LDST_SRCDST_BYTE_OUTFIFO, LENOF_18, IMM_MUST },
+ { MATH0, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0,
+ LENOF_32, IMM_CAN },
+ { MATH1, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1,
+ LENOF_24, IMM_CAN },
+ { MATH2, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2,
+ LENOF_16, IMM_CAN },
+ { MATH3, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3,
+ LENOF_8, IMM_CAN },
+ { CONTEXT1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT,
+ LENOF_128, IMM_CAN },
+ { CONTEXT2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT,
+ LENOF_128, IMM_CAN },
+ { KEY1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_KEY,
+ LENOF_32, IMM_CAN },
+ { KEY2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_KEY,
+ LENOF_32, IMM_CAN },
+ { DESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF,
+ LENOF_256, IMM_NO },
+ { DPID, LDST_CLASS_DECO | LDST_SRCDST_WORD_PID,
+ LENOF_448, IMM_MUST },
+/*32*/ { IDFNS, LDST_SRCDST_WORD_IFNSR, LENOF_18, IMM_MUST },
+ { ODFNS, LDST_SRCDST_WORD_OFNSR, LENOF_18, IMM_MUST },
+ { ALTSOURCE, LDST_SRCDST_BYTE_ALTSOURCE, LENOF_18, IMM_MUST },
+/*35*/ { NFIFO_SZL, LDST_SRCDST_WORD_INFO_FIFO_SZL, LENOF_48, IMM_MUST },
+ { NFIFO_SZM, LDST_SRCDST_WORD_INFO_FIFO_SZM, LENOF_03, IMM_MUST },
+ { NFIFO_L, LDST_SRCDST_WORD_INFO_FIFO_L, LENOF_48, IMM_MUST },
+ { NFIFO_M, LDST_SRCDST_WORD_INFO_FIFO_M, LENOF_03, IMM_MUST },
+ { SZL, LDST_SRCDST_WORD_SZL, LENOF_48, IMM_MUST },
+/*40*/ { SZM, LDST_SRCDST_WORD_SZM, LENOF_03, IMM_MUST }
+};
+
+/*
+ * Allowed LOAD destinations for each SEC Era.
+ * Values represent the number of entries from load_dst[] that are supported.
+ */
+static const unsigned load_dst_sz[] = { 31, 34, 34, 40, 40, 40, 40, 40 };
+
+static inline int load_check_len_offset(int pos, uint32_t length,
+ uint32_t offset)
+{
+ if ((load_dst[pos].dst == DCTRL) &&
+ ((length & ~load_len_mask_allowed[rta_sec_era]) ||
+ (offset & ~load_off_mask_allowed[rta_sec_era])))
+ goto err;
+
+ switch (load_dst[pos].len_off) {
+ case (LENOF_03):
+ if ((length > 3) || (offset))
+ goto err;
+ break;
+ case (LENOF_4):
+ if ((length != 4) || (offset != 0))
+ goto err;
+ break;
+ case (LENOF_48):
+ if (!(((length == 4) && (offset == 0)) ||
+ ((length == 8) && (offset == 0))))
+ goto err;
+ break;
+ case (LENOF_448):
+ if (!(((length == 4) && (offset == 0)) ||
+ ((length == 4) && (offset == 4)) ||
+ ((length == 8) && (offset == 0))))
+ goto err;
+ break;
+ case (LENOF_18):
+ if ((length < 1) || (length > 8) || (offset != 0))
+ goto err;
+ break;
+ case (LENOF_32):
+ if ((length > 32) || (offset > 32) || ((offset + length) > 32))
+ goto err;
+ break;
+ case (LENOF_24):
+ if ((length > 24) || (offset > 24) || ((offset + length) > 24))
+ goto err;
+ break;
+ case (LENOF_16):
+ if ((length > 16) || (offset > 16) || ((offset + length) > 16))
+ goto err;
+ break;
+ case (LENOF_8):
+ if ((length > 8) || (offset > 8) || ((offset + length) > 8))
+ goto err;
+ break;
+ case (LENOF_128):
+ if ((length > 128) || (offset > 128) ||
+ ((offset + length) > 128))
+ goto err;
+ break;
+ case (LENOF_256):
+ if ((length < 1) || (length > 256) || ((length + offset) > 256))
+ goto err;
+ break;
+ case (DSNM):
+ break;
+ default:
+ goto err;
+ }
+
+ return 0;
+err:
+ return -EINVAL;
+}
+
+static inline int rta_load(struct program *program, uint64_t src, uint64_t dst,
+ uint32_t offset, uint32_t length, uint32_t flags)
+{
+ uint32_t opcode = 0;
+ int pos = -1, ret = -EINVAL;
+ unsigned start_pc = program->current_pc, i;
+
+ if (flags & SEQ)
+ opcode = CMD_SEQ_LOAD;
+ else
+ opcode = CMD_LOAD;
+
+ if ((length & 0xffffff00) || (offset & 0xffffff00)) {
+ pr_err("LOAD: Bad length/offset passed. Should be 8 bits\n");
+ goto err;
+ }
+
+ if (flags & SGF)
+ opcode |= LDST_SGF;
+ if (flags & VLF)
+ opcode |= LDST_VLF;
+
+ /* check load destination, length and offset and source type */
+ for (i = 0; i < load_dst_sz[rta_sec_era]; i++)
+ if (dst == load_dst[i].dst) {
+ pos = (int)i;
+ break;
+ }
+ if (-1 == pos) {
+ pr_err("LOAD: Invalid dst. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+
+ if (flags & IMMED) {
+ if (load_dst[pos].imm_src == IMM_NO) {
+ pr_err("LOAD: Invalid source type. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+ opcode |= LDST_IMM;
+ } else if (load_dst[pos].imm_src == IMM_MUST) {
+ pr_err("LOAD IMM: Invalid source type. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+
+ ret = load_check_len_offset(pos, length, offset);
+ if (ret < 0) {
+ pr_err("LOAD: Invalid length/offset. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+
+ opcode |= load_dst[pos].dst_opcode;
+
+ /* DESC BUFFER: length / offset values are specified in 4-byte words */
+ if (dst == DESCBUF) {
+ opcode |= (length >> 2);
+ opcode |= ((offset >> 2) << LDST_OFFSET_SHIFT);
+ } else {
+ opcode |= length;
+ opcode |= (offset << LDST_OFFSET_SHIFT);
+ }
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ /* DECO CONTROL: skip writing pointer of imm data */
+ if (dst == DCTRL)
+ return (int)start_pc;
+
+ /*
+	 * There are 3 possible ways to specify the data to copy:
+	 * - IMMED & !COPY: copy data directly from src (max 8 bytes)
+	 * - IMMED & COPY: copy immediate data from the location given by user
+	 * - !IMMED and not a SEQ cmd: copy the address
+ */
+ if (flags & IMMED)
+ __rta_inline_data(program, src, flags & __COPY_MASK, length);
+ else if (!(flags & SEQ))
+ __rta_out64(program, program->ps, src);
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+#endif /* __RTA_LOAD_CMD_H__*/
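[Editorial note, not part of the patch] rta_load()'s special case for DESCBUF encodes length and offset in 4-byte words rather than bytes, i.e. both are shifted right by 2 before being packed. A sketch, with a hypothetical LDST_OFFSET_SHIFT rather than the real desc.h value:

```c
#include <assert.h>
#include <stdint.h>

#define LDST_OFFSET_SHIFT 8	/* hypothetical field position */

/* length/offset are byte counts; DESCBUF encodes them in 4-byte words */
static uint32_t encode_len_off(uint32_t length, uint32_t offset, int descbuf)
{
	if (descbuf)
		return (length >> 2) | ((offset >> 2) << LDST_OFFSET_SHIFT);
	return length | (offset << LDST_OFFSET_SHIFT);
}
```

The same 8-byte load at offset 4 encodes as words {2, 1} for DESCBUF but as bytes {8, 4} for every other destination.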
diff --git a/drivers/crypto/caam/flib/rta/math_cmd.h b/drivers/crypto/caam/flib/rta/math_cmd.h
new file mode 100644
index 000000000000..cc47d0843075
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/math_cmd.h
@@ -0,0 +1,362 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_MATH_CMD_H__
+#define __RTA_MATH_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t math_op1[][2] = {
+/*1*/ { MATH0, MATH_SRC0_REG0 },
+ { MATH1, MATH_SRC0_REG1 },
+ { MATH2, MATH_SRC0_REG2 },
+ { MATH3, MATH_SRC0_REG3 },
+ { SEQINSZ, MATH_SRC0_SEQINLEN },
+ { SEQOUTSZ, MATH_SRC0_SEQOUTLEN },
+ { VSEQINSZ, MATH_SRC0_VARSEQINLEN },
+ { VSEQOUTSZ, MATH_SRC0_VARSEQOUTLEN },
+ { ZERO, MATH_SRC0_ZERO },
+/*10*/ { NONE, 0 }, /* dummy value */
+ { DPOVRD, MATH_SRC0_DPOVRD },
+ { ONE, MATH_SRC0_ONE }
+};
+
+/*
+ * Allowed MATH op1 sources for each SEC Era.
+ * Values represent the number of entries from math_op1[] that are supported.
+ */
+static const unsigned math_op1_sz[] = {10, 10, 12, 12, 12, 12, 12, 12};
+
+static const uint32_t math_op2[][2] = {
+/*1*/ { MATH0, MATH_SRC1_REG0 },
+ { MATH1, MATH_SRC1_REG1 },
+ { MATH2, MATH_SRC1_REG2 },
+ { MATH3, MATH_SRC1_REG3 },
+ { ABD, MATH_SRC1_INFIFO },
+ { OFIFO, MATH_SRC1_OUTFIFO },
+ { ONE, MATH_SRC1_ONE },
+/*8*/ { NONE, 0 }, /* dummy value */
+ { JOBSRC, MATH_SRC1_JOBSOURCE },
+ { DPOVRD, MATH_SRC1_DPOVRD },
+ { VSEQINSZ, MATH_SRC1_VARSEQINLEN },
+ { VSEQOUTSZ, MATH_SRC1_VARSEQOUTLEN },
+/*13*/ { ZERO, MATH_SRC1_ZERO }
+};
+
+/*
+ * Allowed MATH op2 sources for each SEC Era.
+ * Values represent the number of entries from math_op2[] that are supported.
+ */
+static const unsigned math_op2_sz[] = {8, 9, 13, 13, 13, 13, 13, 13};
+
+static const uint32_t math_result[][2] = {
+/*1*/ { MATH0, MATH_DEST_REG0 },
+ { MATH1, MATH_DEST_REG1 },
+ { MATH2, MATH_DEST_REG2 },
+ { MATH3, MATH_DEST_REG3 },
+ { SEQINSZ, MATH_DEST_SEQINLEN },
+ { SEQOUTSZ, MATH_DEST_SEQOUTLEN },
+ { VSEQINSZ, MATH_DEST_VARSEQINLEN },
+ { VSEQOUTSZ, MATH_DEST_VARSEQOUTLEN },
+/*9*/ { NONE, MATH_DEST_NONE },
+ { DPOVRD, MATH_DEST_DPOVRD }
+};
+
+/*
+ * Allowed MATH result destinations for each SEC Era.
+ * Values represent the number of entries from math_result[] that are
+ * supported.
+ */
+static const unsigned math_result_sz[] = {9, 9, 10, 10, 10, 10, 10, 10};
+
+static inline int rta_math(struct program *program, uint64_t operand1,
+ uint32_t op, uint64_t operand2, uint32_t result,
+ int length, uint32_t options)
+{
+ uint32_t opcode = CMD_MATH;
+ uint32_t val = 0;
+ int ret = -EINVAL;
+ unsigned start_pc = program->current_pc;
+
+ if (((op == MATH_FUN_BSWAP) && (rta_sec_era < RTA_SEC_ERA_4)) ||
+ ((op == MATH_FUN_ZBYT) && (rta_sec_era < RTA_SEC_ERA_2))) {
+ pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+ USER_SEC_ERA(rta_sec_era), program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+
+ if (options & SWP) {
+ if (rta_sec_era < RTA_SEC_ERA_7) {
+ pr_err("MATH: operation not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+ USER_SEC_ERA(rta_sec_era), program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+
+ if ((options & IFB) ||
+ (!(options & IMMED) && !(options & IMMED2)) ||
+ ((options & IMMED) && (options & IMMED2))) {
+ pr_err("MATH: SWP - invalid configuration. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ }
+
+ /*
+	 * The SHLD operation differs from the others: it alone may
+	 * take _NONE as the first operand or _SEQINSZ as the
+	 * second operand
+ */
+ if ((op != MATH_FUN_SHLD) && ((operand1 == NONE) ||
+ (operand2 == SEQINSZ))) {
+ pr_err("MATH: Invalid operand. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+
+ /*
+ * We first check if it is unary operation. In that
+ * case second operand must be _NONE
+ */
+ if (((op == MATH_FUN_ZBYT) || (op == MATH_FUN_BSWAP)) &&
+ (operand2 != NONE)) {
+ pr_err("MATH: Invalid operand2. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+
+ /* Write first operand field */
+ if (options & IMMED) {
+ opcode |= MATH_SRC0_IMM;
+ } else {
+ ret = __rta_map_opcode((uint32_t)operand1, math_op1,
+ math_op1_sz[rta_sec_era], &val);
+ if (ret < 0) {
+ pr_err("MATH: operand1 not supported. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ opcode |= val;
+ }
+
+ /* Write second operand field */
+ if (options & IMMED2) {
+ opcode |= MATH_SRC1_IMM;
+ } else {
+ ret = __rta_map_opcode((uint32_t)operand2, math_op2,
+ math_op2_sz[rta_sec_era], &val);
+ if (ret < 0) {
+ pr_err("MATH: operand2 not supported. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ opcode |= val;
+ }
+
+ /* Write result field */
+ ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+ &val);
+ if (ret < 0) {
+ pr_err("MATH: result not supported. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+ opcode |= val;
+
+ /*
+	 * as we encode operations with their "real" values, we do not have
+	 * to translate them, but we do need to validate the value
+ */
+ switch (op) {
+ /*Binary operators */
+ case (MATH_FUN_ADD):
+ case (MATH_FUN_ADDC):
+ case (MATH_FUN_SUB):
+ case (MATH_FUN_SUBB):
+ case (MATH_FUN_OR):
+ case (MATH_FUN_AND):
+ case (MATH_FUN_XOR):
+ case (MATH_FUN_LSHIFT):
+ case (MATH_FUN_RSHIFT):
+ case (MATH_FUN_SHLD):
+ /* Unary operators */
+ case (MATH_FUN_ZBYT):
+ case (MATH_FUN_BSWAP):
+ opcode |= op;
+ break;
+ default:
+ pr_err("MATH: operator is not supported. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ opcode |= (options & ~(IMMED | IMMED2));
+
+ /* Verify length */
+ switch (length) {
+ case (1):
+ opcode |= MATH_LEN_1BYTE;
+ break;
+ case (2):
+ opcode |= MATH_LEN_2BYTE;
+ break;
+ case (4):
+ opcode |= MATH_LEN_4BYTE;
+ break;
+ case (8):
+ opcode |= MATH_LEN_8BYTE;
+ break;
+ default:
+ pr_err("MATH: length is not supported. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ /* Write immediate value */
+ if ((options & IMMED) && !(options & IMMED2)) {
+ __rta_out64(program, (length > 4) && !(options & IFB),
+ operand1);
+ } else if ((options & IMMED2) && !(options & IMMED)) {
+ __rta_out64(program, (length > 4) && !(options & IFB),
+ operand2);
+ } else if ((options & IMMED) && (options & IMMED2)) {
+ __rta_out32(program, lower_32_bits(operand1));
+ __rta_out32(program, lower_32_bits(operand2));
+ }
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+static inline int rta_mathi(struct program *program, uint64_t operand,
+ uint32_t op, uint8_t imm, uint32_t result,
+ int length, uint32_t options)
+{
+ uint32_t opcode = CMD_MATHI;
+ uint32_t val = 0;
+ int ret = -EINVAL;
+ unsigned start_pc = program->current_pc;
+
+ if (rta_sec_era < RTA_SEC_ERA_6) {
+ pr_err("MATHI: Command not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+ USER_SEC_ERA(rta_sec_era), program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+
+	if ((op == MATH_FUN_FBYT) && (options & SSEL)) {
+ pr_err("MATHI: Illegal combination - FBYT and SSEL. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+
+ if ((options & SWP) && (rta_sec_era < RTA_SEC_ERA_7)) {
+ pr_err("MATHI: SWP not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+ USER_SEC_ERA(rta_sec_era), program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+
+ /* Write first operand field */
+ if (!(options & SSEL))
+ ret = __rta_map_opcode((uint32_t)operand, math_op1,
+ math_op1_sz[rta_sec_era], &val);
+ else
+ ret = __rta_map_opcode((uint32_t)operand, math_op2,
+ math_op2_sz[rta_sec_era], &val);
+ if (ret < 0) {
+ pr_err("MATHI: operand not supported. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+
+ if (!(options & SSEL))
+ opcode |= val;
+ else
+ opcode |= (val << (MATHI_SRC1_SHIFT - MATH_SRC1_SHIFT));
+
+ /* Write second operand field */
+ opcode |= (imm << MATHI_IMM_SHIFT);
+
+ /* Write result field */
+ ret = __rta_map_opcode(result, math_result, math_result_sz[rta_sec_era],
+ &val);
+ if (ret < 0) {
+ pr_err("MATHI: result not supported. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+ opcode |= (val << (MATHI_DEST_SHIFT - MATH_DEST_SHIFT));
+
+ /*
+ * as we encode operations with their "real" values, we do not have to
+	 * translate them, but we do need to validate the value
+ */
+ switch (op) {
+ case (MATH_FUN_ADD):
+ case (MATH_FUN_ADDC):
+ case (MATH_FUN_SUB):
+ case (MATH_FUN_SUBB):
+ case (MATH_FUN_OR):
+ case (MATH_FUN_AND):
+ case (MATH_FUN_XOR):
+ case (MATH_FUN_LSHIFT):
+ case (MATH_FUN_RSHIFT):
+ case (MATH_FUN_FBYT):
+ opcode |= op;
+ break;
+ default:
+ pr_err("MATHI: operator not supported. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ opcode |= options;
+
+ /* Verify length */
+ switch (length) {
+ case (1):
+ opcode |= MATH_LEN_1BYTE;
+ break;
+ case (2):
+ opcode |= MATH_LEN_2BYTE;
+ break;
+ case (4):
+ opcode |= MATH_LEN_4BYTE;
+ break;
+ case (8):
+ opcode |= MATH_LEN_8BYTE;
+ break;
+ default:
+ pr_err("MATHI: length %d not supported. SEC PC: %d; Instr: %d\n",
+ length, program->current_pc,
+ program->current_instruction);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+#endif /* __RTA_MATH_CMD_H__ */
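A note for reviewers on the pattern used throughout these files: user-visible symbols are translated into opcode bits via __rta_map_opcode(), and the per-Era size arrays (math_op1_sz[], math_result_sz[], ...) cap how much of each mapping table is searched -- that is how Era-unsupported operands get rejected. A minimal standalone sketch of that mechanism; all names and bit values below are invented for illustration, not the real SEC encodings:

```c
#include <stdint.h>

/* Invented user-level and hardware-level encodings (illustration only) */
#define REG_M0 0x01u
#define REG_M1 0x02u
#define HW_M0  (0x08u << 16)
#define HW_M1  (0x09u << 16)

static const uint32_t op_table[][2] = {
	{ REG_M0, HW_M0 },
	{ REG_M1, HW_M1 },	/* pretend this entry needs a newer Era */
};

/* Per-Era table sizes: the older Era sees 1 entry, the newer one sees 2 */
static const unsigned int op_table_sz[] = { 1, 2 };

/* Same contract as __rta_map_opcode(): search only the first `count`
 * entries; on success store the mapped bits and return 0, else negative. */
static int map_opcode(uint32_t user_val, const uint32_t table[][2],
		      unsigned int count, uint32_t *val)
{
	unsigned int i;

	for (i = 0; i < count; i++) {
		if (table[i][0] == user_val) {
			*val = table[i][1];
			return 0;
		}
	}
	return -1;
}
```

With `op_table_sz[0]` the second register maps to an error, which is exactly how MATHI above ends up reporting "operand not supported" on an older Era.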
diff --git a/drivers/crypto/caam/flib/rta/move_cmd.h b/drivers/crypto/caam/flib/rta/move_cmd.h
new file mode 100644
index 000000000000..b12086ae835a
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/move_cmd.h
@@ -0,0 +1,401 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_MOVE_CMD_H__
+#define __RTA_MOVE_CMD_H__
+
+#define MOVE_SET_AUX_SRC 0x01
+#define MOVE_SET_AUX_DST 0x02
+#define MOVE_SET_AUX_LS 0x03
+#define MOVE_SET_LEN_16b 0x04
+
+#define MOVE_SET_AUX_MATH 0x10
+#define MOVE_SET_AUX_MATH_SRC (MOVE_SET_AUX_SRC | MOVE_SET_AUX_MATH)
+#define MOVE_SET_AUX_MATH_DST (MOVE_SET_AUX_DST | MOVE_SET_AUX_MATH)
+
+#define MASK_16b 0xFF
+
+/* MOVE command type */
+#define __MOVE 1
+#define __MOVEB 2
+#define __MOVEDW 3
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t move_src_table[][2] = {
+/*1*/ { CONTEXT1, MOVE_SRC_CLASS1CTX },
+ { CONTEXT2, MOVE_SRC_CLASS2CTX },
+ { OFIFO, MOVE_SRC_OUTFIFO },
+ { DESCBUF, MOVE_SRC_DESCBUF },
+ { MATH0, MOVE_SRC_MATH0 },
+ { MATH1, MOVE_SRC_MATH1 },
+ { MATH2, MOVE_SRC_MATH2 },
+ { MATH3, MOVE_SRC_MATH3 },
+/*9*/ { IFIFOABD, MOVE_SRC_INFIFO },
+ { IFIFOAB1, MOVE_SRC_INFIFO_CL | MOVE_AUX_LS },
+ { IFIFOAB2, MOVE_SRC_INFIFO_CL },
+/*12*/ { ABD, MOVE_SRC_INFIFO_NO_NFIFO },
+ { AB1, MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_LS },
+ { AB2, MOVE_SRC_INFIFO_NO_NFIFO | MOVE_AUX_MS }
+};
+
+/*
+ * Allowed MOVE / MOVE_LEN sources for each SEC Era.
+ * Values represent the number of entries from move_src_table[] that are
+ * supported.
+ */
+static const unsigned move_src_table_sz[] = {9, 11, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t move_dst_table[][2] = {
+/*1*/ { CONTEXT1, MOVE_DEST_CLASS1CTX },
+ { CONTEXT2, MOVE_DEST_CLASS2CTX },
+ { OFIFO, MOVE_DEST_OUTFIFO },
+ { DESCBUF, MOVE_DEST_DESCBUF },
+ { MATH0, MOVE_DEST_MATH0 },
+ { MATH1, MOVE_DEST_MATH1 },
+ { MATH2, MOVE_DEST_MATH2 },
+ { MATH3, MOVE_DEST_MATH3 },
+ { IFIFOAB1, MOVE_DEST_CLASS1INFIFO },
+ { IFIFOAB2, MOVE_DEST_CLASS2INFIFO },
+ { PKA, MOVE_DEST_PK_A },
+ { KEY1, MOVE_DEST_CLASS1KEY },
+ { KEY2, MOVE_DEST_CLASS2KEY },
+/*14*/ { IFIFO, MOVE_DEST_INFIFO },
+/*15*/ { ALTSOURCE, MOVE_DEST_ALTSOURCE}
+};
+
+/*
+ * Allowed MOVE / MOVE_LEN destinations for each SEC Era.
+ * Values represent the number of entries from move_dst_table[] that are
+ * supported.
+ */
+static const unsigned move_dst_table_sz[] = {13, 14, 14, 15, 15, 15, 15, 15};
+
+static inline int set_move_offset(struct program *program, uint64_t src,
+ uint16_t src_offset, uint64_t dst,
+ uint16_t dst_offset, uint16_t *offset,
+ uint16_t *opt);
+
+static inline int math_offset(uint16_t offset);
+
+static inline int rta_move(struct program *program, int cmd_type, uint64_t src,
+ uint16_t src_offset, uint64_t dst,
+ uint16_t dst_offset, uint32_t length, uint32_t flags)
+{
+ uint32_t opcode = 0;
+ uint16_t offset = 0, opt = 0;
+ uint32_t val = 0;
+ int ret = -EINVAL;
+ bool is_move_len_cmd = false;
+ unsigned start_pc = program->current_pc;
+
+ if ((rta_sec_era < RTA_SEC_ERA_7) && (cmd_type != __MOVE)) {
+ pr_err("MOVE: MOVEB / MOVEDW not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+ USER_SEC_ERA(rta_sec_era), program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+
+ /* write command type */
+ if (cmd_type == __MOVEB) {
+ opcode = CMD_MOVEB;
+ } else if (cmd_type == __MOVEDW) {
+ opcode = CMD_MOVEDW;
+ } else if (!(flags & IMMED)) {
+ if (rta_sec_era < RTA_SEC_ERA_3) {
+ pr_err("MOVE: MOVE_LEN not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+ USER_SEC_ERA(rta_sec_era), program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+
+ if ((length != MATH0) && (length != MATH1) &&
+ (length != MATH2) && (length != MATH3)) {
+ pr_err("MOVE: MOVE_LEN length must be MATH[0-3]. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+
+ opcode = CMD_MOVE_LEN;
+ is_move_len_cmd = true;
+ } else {
+ opcode = CMD_MOVE;
+ }
+
+ /*
+ * Write the offset first, to catch invalid combinations or incorrect
+ * offset values sooner, and to decide which offset (src or dst)
+ * belongs in this field.
+ */
+ ret = set_move_offset(program, src, src_offset, dst, dst_offset,
+ &offset, &opt);
+ if (ret < 0)
+ goto err;
+
+ opcode |= (offset << MOVE_OFFSET_SHIFT) & MOVE_OFFSET_MASK;
+
+ /* set AUX field if required */
+ if (opt == MOVE_SET_AUX_SRC) {
+ opcode |= ((src_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+ } else if (opt == MOVE_SET_AUX_DST) {
+ opcode |= ((dst_offset / 16) << MOVE_AUX_SHIFT) & MOVE_AUX_MASK;
+ } else if (opt == MOVE_SET_AUX_LS) {
+ opcode |= MOVE_AUX_LS;
+ } else if (opt & MOVE_SET_AUX_MATH) {
+ if (opt & MOVE_SET_AUX_SRC)
+ offset = src_offset;
+ else
+ offset = dst_offset;
+
+ if (rta_sec_era < RTA_SEC_ERA_6) {
+ if (offset)
+ pr_debug("MOVE: Offset not supported by SEC Era %d. SEC PC: %d; Instr: %d\n",
+ USER_SEC_ERA(rta_sec_era),
+ program->current_pc,
+ program->current_instruction);
+ /* nothing to do for offset = 0 */
+ } else {
+ ret = math_offset(offset);
+ if (ret < 0) {
+ pr_err("MOVE: Invalid offset in MATH register. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+
+ opcode |= (uint32_t)ret;
+ }
+ }
+
+ /* write source field */
+ ret = __rta_map_opcode((uint32_t)src, move_src_table,
+ move_src_table_sz[rta_sec_era], &val);
+ if (ret < 0) {
+ pr_err("MOVE: Invalid SRC. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+ opcode |= val;
+
+ /* write destination field */
+ ret = __rta_map_opcode((uint32_t)dst, move_dst_table,
+ move_dst_table_sz[rta_sec_era], &val);
+ if (ret < 0) {
+ pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+ opcode |= val;
+
+ /* write flags */
+ if (flags & (FLUSH1 | FLUSH2))
+ opcode |= MOVE_AUX_MS;
+ if (flags & (LAST2 | LAST1))
+ opcode |= MOVE_AUX_LS;
+ if (flags & WAITCOMP)
+ opcode |= MOVE_WAITCOMP;
+
+ if (!is_move_len_cmd) {
+ /* write length */
+ if (opt == MOVE_SET_LEN_16b)
+ opcode |= (length & (MOVE_OFFSET_MASK | MOVE_LEN_MASK));
+ else
+ opcode |= (length & MOVE_LEN_MASK);
+ } else {
+ /* write mrsel */
+ switch (length) {
+ case (MATH0):
+ /*
+ * opcode |= MOVELEN_MRSEL_MATH0;
+ * MOVELEN_MRSEL_MATH0 is 0
+ */
+ break;
+ case (MATH1):
+ opcode |= MOVELEN_MRSEL_MATH1;
+ break;
+ case (MATH2):
+ opcode |= MOVELEN_MRSEL_MATH2;
+ break;
+ case (MATH3):
+ opcode |= MOVELEN_MRSEL_MATH3;
+ break;
+ }
+
+ /* write size */
+ if (rta_sec_era >= RTA_SEC_ERA_7) {
+ if (flags & SIZE_WORD)
+ opcode |= MOVELEN_SIZE_WORD;
+ else if (flags & SIZE_BYTE)
+ opcode |= MOVELEN_SIZE_BYTE;
+ else if (flags & SIZE_DWORD)
+ opcode |= MOVELEN_SIZE_DWORD;
+ }
+ }
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+static inline int set_move_offset(struct program *program, uint64_t src,
+ uint16_t src_offset, uint64_t dst,
+ uint16_t dst_offset, uint16_t *offset,
+ uint16_t *opt)
+{
+ switch (src) {
+ case (CONTEXT1):
+ case (CONTEXT2):
+ if (dst == DESCBUF) {
+ *opt = MOVE_SET_AUX_SRC;
+ *offset = dst_offset;
+ } else if ((dst == KEY1) || (dst == KEY2)) {
+ if ((src_offset) && (dst_offset)) {
+ pr_err("MOVE: Bad offset. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ if (dst_offset) {
+ *opt = MOVE_SET_AUX_LS;
+ *offset = dst_offset;
+ } else {
+ *offset = src_offset;
+ }
+ } else {
+ if ((dst == MATH0) || (dst == MATH1) ||
+ (dst == MATH2) || (dst == MATH3)) {
+ *opt = MOVE_SET_AUX_MATH_DST;
+ } else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+ (src_offset % 4)) {
+ pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+
+ *offset = src_offset;
+ }
+ break;
+
+ case (OFIFO):
+ if (dst == OFIFO) {
+ pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ if (((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+ (dst == IFIFO) || (dst == PKA)) &&
+ (src_offset || dst_offset)) {
+ pr_err("MOVE: Offset should be zero. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ *offset = dst_offset;
+ break;
+
+ case (DESCBUF):
+ if ((dst == CONTEXT1) || (dst == CONTEXT2)) {
+ *opt = MOVE_SET_AUX_DST;
+ } else if ((dst == MATH0) || (dst == MATH1) ||
+ (dst == MATH2) || (dst == MATH3)) {
+ *opt = MOVE_SET_AUX_MATH_DST;
+ } else if (dst == DESCBUF) {
+ pr_err("MOVE: Invalid DST. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ } else if (((dst == OFIFO) || (dst == ALTSOURCE)) &&
+ (src_offset % 4)) {
+ pr_err("MOVE: Invalid offset alignment. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+
+ *offset = src_offset;
+ break;
+
+ case (MATH0):
+ case (MATH1):
+ case (MATH2):
+ case (MATH3):
+ if ((dst == OFIFO) || (dst == ALTSOURCE)) {
+ if (src_offset % 4) {
+ pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ *offset = src_offset;
+ } else if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+ (dst == IFIFO) || (dst == PKA)) {
+ *offset = src_offset;
+ } else {
+ *offset = dst_offset;
+
+ /*
+ * Within this branch, the condition is equivalent to:
+ * dst in { CONTEXT[1-2], DESCBUF, MATH[0-3] }
+ */
+ if ((dst != KEY1) && (dst != KEY2))
+ *opt = MOVE_SET_AUX_MATH_SRC;
+ }
+ break;
+
+ case (IFIFOABD):
+ case (IFIFOAB1):
+ case (IFIFOAB2):
+ case (ABD):
+ case (AB1):
+ case (AB2):
+ if ((dst == IFIFOAB1) || (dst == IFIFOAB2) ||
+ (dst == IFIFO) || (dst == PKA) || (dst == ALTSOURCE)) {
+ pr_err("MOVE: Bad DST. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ } else {
+ if (dst == OFIFO) {
+ *opt = MOVE_SET_LEN_16b;
+ } else {
+ if (dst_offset % 4) {
+ pr_err("MOVE: Bad offset alignment. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ *offset = dst_offset;
+ }
+ }
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+ err:
+ return -EINVAL;
+}
+
+static inline int math_offset(uint16_t offset)
+{
+ switch (offset) {
+ case 0:
+ return 0;
+ case 4:
+ return MOVE_AUX_LS;
+ case 6:
+ return MOVE_AUX_MS;
+ case 7:
+ return MOVE_AUX_LS | MOVE_AUX_MS;
+ }
+
+ return -EINVAL;
+}
+
+#endif /* __RTA_MOVE_CMD_H__ */
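One subtlety worth calling out in move_cmd.h: when a MATH register is the MOVE source or destination on Era >= 6, the sub-register byte offset is squeezed into the two AUX bits of the opcode, so only offsets 0, 4, 6 and 7 are representable -- that is all math_offset() encodes. A standalone sketch of that mapping; the AUX bit positions below are assumed for illustration (the real values come from desc.h):

```c
#include <stdint.h>

/* Assumed AUX bit positions (illustration only; real values in desc.h) */
#define MOVE_AUX_LS (1u << 25)
#define MOVE_AUX_MS (2u << 25)

/* Mirrors math_offset(): only byte offsets 0, 4, 6 and 7 within a MATH
 * register fit in the two AUX bits of a MOVE opcode; anything else is
 * rejected. */
static int math_offset(uint16_t offset)
{
	switch (offset) {
	case 0:
		return 0;
	case 4:
		return (int)MOVE_AUX_LS;
	case 6:
		return (int)MOVE_AUX_MS;
	case 7:
		return (int)(MOVE_AUX_LS | MOVE_AUX_MS);
	}
	return -22;	/* -EINVAL */
}
```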
diff --git a/drivers/crypto/caam/flib/rta/nfifo_cmd.h b/drivers/crypto/caam/flib/rta/nfifo_cmd.h
new file mode 100644
index 000000000000..899796929e1b
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/nfifo_cmd.h
@@ -0,0 +1,157 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_NFIFO_CMD_H__
+#define __RTA_NFIFO_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t nfifo_src[][2] = {
+/*1*/ { IFIFO, NFIFOENTRY_STYPE_DFIFO },
+ { OFIFO, NFIFOENTRY_STYPE_OFIFO },
+ { PAD, NFIFOENTRY_STYPE_PAD },
+/*4*/ { MSGOUTSNOOP, NFIFOENTRY_STYPE_SNOOP | NFIFOENTRY_DEST_BOTH },
+/*5*/ { ALTSOURCE, NFIFOENTRY_STYPE_ALTSOURCE },
+ { OFIFO_SYNC, NFIFOENTRY_STYPE_OFIFO_SYNC },
+/*7*/ { MSGOUTSNOOP_ALT, NFIFOENTRY_STYPE_SNOOP_ALT | NFIFOENTRY_DEST_BOTH }
+};
+
+/*
+ * Allowed NFIFO LOAD sources for each SEC Era.
+ * Values represent the number of entries from nfifo_src[] that are supported.
+ */
+static const unsigned nfifo_src_sz[] = {4, 5, 5, 5, 5, 5, 5, 7};
+
+static const uint32_t nfifo_data[][2] = {
+ { MSG, NFIFOENTRY_DTYPE_MSG },
+ { MSG1, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_MSG },
+ { MSG2, NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_MSG },
+ { IV1, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_IV },
+ { IV2, NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_IV },
+ { ICV1, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_ICV },
+ { ICV2, NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_ICV },
+ { SAD1, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SAD },
+ { AAD1, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_AAD },
+ { AAD2, NFIFOENTRY_DEST_CLASS2 | NFIFOENTRY_DTYPE_AAD },
+ { AFHA_SBOX, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_SBOX },
+ { SKIP, NFIFOENTRY_DTYPE_SKIP },
+ { PKE, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_E },
+ { PKN, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_N },
+ { PKA, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A },
+ { PKA0, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A0 },
+ { PKA1, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A1 },
+ { PKA2, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A2 },
+ { PKA3, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_A3 },
+ { PKB, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B },
+ { PKB0, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B0 },
+ { PKB1, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B1 },
+ { PKB2, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B2 },
+ { PKB3, NFIFOENTRY_DEST_CLASS1 | NFIFOENTRY_DTYPE_PK_B3 },
+ { AB1, NFIFOENTRY_DEST_CLASS1 },
+ { AB2, NFIFOENTRY_DEST_CLASS2 },
+ { ABD, NFIFOENTRY_DEST_DECO }
+};
+
+static const uint32_t nfifo_flags[][2] = {
+/*1*/ { LAST1, NFIFOENTRY_LC1 },
+ { LAST2, NFIFOENTRY_LC2 },
+ { FLUSH1, NFIFOENTRY_FC1 },
+ { BP, NFIFOENTRY_BND },
+ { PAD_ZERO, NFIFOENTRY_PTYPE_ZEROS },
+ { PAD_NONZERO, NFIFOENTRY_PTYPE_RND_NOZEROS },
+ { PAD_INCREMENT, NFIFOENTRY_PTYPE_INCREMENT },
+ { PAD_RANDOM, NFIFOENTRY_PTYPE_RND },
+ { PAD_ZERO_N1, NFIFOENTRY_PTYPE_ZEROS_NZ },
+ { PAD_NONZERO_0, NFIFOENTRY_PTYPE_RND_NZ_LZ },
+ { PAD_N1, NFIFOENTRY_PTYPE_N },
+/*12*/ { PAD_NONZERO_N, NFIFOENTRY_PTYPE_RND_NZ_N },
+ { FLUSH2, NFIFOENTRY_FC2 },
+ { OC, NFIFOENTRY_OC }
+};
+
+/*
+ * Allowed NFIFO LOAD flags for each SEC Era.
+ * Values represent the number of entries from nfifo_flags[] that are supported.
+ */
+static const unsigned nfifo_flags_sz[] = {12, 14, 14, 14, 14, 14, 14, 14};
+
+static const uint32_t nfifo_pad_flags[][2] = {
+ { BM, NFIFOENTRY_BM },
+ { PS, NFIFOENTRY_PS },
+ { PR, NFIFOENTRY_PR }
+};
+
+/*
+ * Allowed NFIFO LOAD pad flags for each SEC Era.
+ * Values represent the number of entries from nfifo_pad_flags[] that are
+ * supported.
+ */
+static const unsigned nfifo_pad_flags_sz[] = {2, 2, 2, 2, 3, 3, 3, 3};
+
+static inline int rta_nfifo_load(struct program *program, uint32_t src,
+ uint32_t data, uint32_t length, uint32_t flags)
+{
+ uint32_t opcode = 0, val;
+ int ret = -EINVAL;
+ uint32_t load_cmd = CMD_LOAD | LDST_IMM | LDST_CLASS_IND_CCB |
+ LDST_SRCDST_WORD_INFO_FIFO;
+ unsigned start_pc = program->current_pc;
+
+ if ((data == AFHA_SBOX) && (rta_sec_era == RTA_SEC_ERA_7)) {
+ pr_err("NFIFO: AFHA S-box not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+
+ /* write source field */
+ ret = __rta_map_opcode(src, nfifo_src, nfifo_src_sz[rta_sec_era], &val);
+ if (ret < 0) {
+ pr_err("NFIFO: Invalid SRC. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+ opcode |= val;
+
+ /* write type field */
+ ret = __rta_map_opcode(data, nfifo_data, ARRAY_SIZE(nfifo_data), &val);
+ if (ret < 0) {
+ pr_err("NFIFO: Invalid data. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+ opcode |= val;
+
+ /* write DL field */
+ if (!(flags & EXT)) {
+ opcode |= length & NFIFOENTRY_DLEN_MASK;
+ load_cmd |= 4;
+ } else {
+ load_cmd |= 8;
+ }
+
+ /* write flags */
+ __rta_map_flags(flags, nfifo_flags, nfifo_flags_sz[rta_sec_era],
+ &opcode);
+
+ /* in case of padding, check the destination */
+ if (src == PAD)
+ __rta_map_flags(flags, nfifo_pad_flags,
+ nfifo_pad_flags_sz[rta_sec_era], &opcode);
+
+ /* write LOAD command first */
+ __rta_out32(program, load_cmd);
+ __rta_out32(program, opcode);
+
+ if (flags & EXT)
+ __rta_out32(program, length & NFIFOENTRY_DLEN_MASK);
+
+ program->current_instruction++;
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+#endif /* __RTA_NFIFO_CMD_H__ */
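The NFIFO routine also shows the companion helper, __rta_map_flags(): instead of mapping one value, it ORs into the opcode the hardware bits of every user flag that is set, again searching only the Era-supported prefix of the table. A minimal sketch of that behavior, with invented flag bits (not the real NFIFOENTRY_* values):

```c
#include <stdint.h>

/* Invented user flags and hardware bits (illustration only) */
#define USR_LAST1  0x1u
#define USR_FLUSH1 0x2u
#define HW_LC1     (1u << 28)
#define HW_FC1     (1u << 27)

static const uint32_t flag_table[][2] = {
	{ USR_LAST1,  HW_LC1 },
	{ USR_FLUSH1, HW_FC1 },	/* pretend this entry needs a newer Era */
};

/* Mirrors __rta_map_flags(): OR in the hardware bits of every user flag
 * that is both set and within the Era-supported prefix of the table;
 * flags beyond the prefix are silently ignored. */
static void map_flags(uint32_t flags, const uint32_t table[][2],
		      unsigned int count, uint32_t *opcode)
{
	unsigned int i;

	for (i = 0; i < count; i++)
		if (flags & table[i][0])
			*opcode |= table[i][1];
}
```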
diff --git a/drivers/crypto/caam/flib/rta/operation_cmd.h b/drivers/crypto/caam/flib/rta/operation_cmd.h
new file mode 100644
index 000000000000..bade9dfdd52e
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/operation_cmd.h
@@ -0,0 +1,545 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_OPERATION_CMD_H__
+#define __RTA_OPERATION_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int __rta_alg_aai_aes(uint16_t aai)
+{
+ uint16_t aes_mode = aai & OP_ALG_AESA_MODE_MASK;
+
+ if (aai & OP_ALG_AAI_C2K) {
+ if (rta_sec_era < RTA_SEC_ERA_5)
+ return -EINVAL;
+ if ((aes_mode != OP_ALG_AAI_CCM) &&
+ (aes_mode != OP_ALG_AAI_GCM))
+ return -EINVAL;
+ }
+
+ switch (aes_mode) {
+ case OP_ALG_AAI_CBC_CMAC:
+ case OP_ALG_AAI_CTR_CMAC_LTE:
+ case OP_ALG_AAI_CTR_CMAC:
+ if (rta_sec_era < RTA_SEC_ERA_2)
+ return -EINVAL;
+ /* fall through */
+ case OP_ALG_AAI_CTR:
+ case OP_ALG_AAI_CBC:
+ case OP_ALG_AAI_ECB:
+ case OP_ALG_AAI_OFB:
+ case OP_ALG_AAI_CFB:
+ case OP_ALG_AAI_XTS:
+ case OP_ALG_AAI_CMAC:
+ case OP_ALG_AAI_XCBC_MAC:
+ case OP_ALG_AAI_CCM:
+ case OP_ALG_AAI_GCM:
+ case OP_ALG_AAI_CBC_XCBCMAC:
+ case OP_ALG_AAI_CTR_XCBCMAC:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_alg_aai_des(uint16_t aai)
+{
+ uint16_t aai_code = (uint16_t)(aai & ~OP_ALG_AAI_CHECKODD);
+
+ switch (aai_code) {
+ case OP_ALG_AAI_CBC:
+ case OP_ALG_AAI_ECB:
+ case OP_ALG_AAI_CFB:
+ case OP_ALG_AAI_OFB:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_alg_aai_md5(uint16_t aai)
+{
+ switch (aai) {
+ case OP_ALG_AAI_HMAC:
+ if (rta_sec_era < RTA_SEC_ERA_2)
+ return -EINVAL;
+ /* fall through */
+ case OP_ALG_AAI_SMAC:
+ case OP_ALG_AAI_HASH:
+ case OP_ALG_AAI_HMAC_PRECOMP:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_alg_aai_sha(uint16_t aai)
+{
+ switch (aai) {
+ case OP_ALG_AAI_HMAC:
+ if (rta_sec_era < RTA_SEC_ERA_2)
+ return -EINVAL;
+ /* fall through */
+ case OP_ALG_AAI_HASH:
+ case OP_ALG_AAI_HMAC_PRECOMP:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_alg_aai_rng(uint16_t aai)
+{
+ uint16_t rng_mode = aai & OP_ALG_RNG_MODE_MASK;
+ uint16_t rng_sh = aai & OP_ALG_AAI_RNG4_SH_MASK;
+
+ switch (rng_mode) {
+ case OP_ALG_AAI_RNG:
+ case OP_ALG_AAI_RNG_NZB:
+ case OP_ALG_AAI_RNG_OBP:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* State Handle bits are valid only for SEC Era >= 5 */
+ if ((rta_sec_era < RTA_SEC_ERA_5) && rng_sh)
+ return -EINVAL;
+
+ /* PS, AI, SK bits are also valid only for SEC Era >= 5 */
+ if ((rta_sec_era < RTA_SEC_ERA_5) && (aai &
+ (OP_ALG_AAI_RNG4_PS | OP_ALG_AAI_RNG4_AI | OP_ALG_AAI_RNG4_SK)))
+ return -EINVAL;
+
+ switch (rng_sh) {
+ case OP_ALG_AAI_RNG4_SH_0:
+ case OP_ALG_AAI_RNG4_SH_1:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_alg_aai_crc(uint16_t aai)
+{
+ uint16_t aai_code = aai & OP_ALG_CRC_POLY_MASK;
+
+ switch (aai_code) {
+ case OP_ALG_AAI_802:
+ case OP_ALG_AAI_3385:
+ case OP_ALG_AAI_CUST_POLY:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_alg_aai_kasumi(uint16_t aai)
+{
+ switch (aai) {
+ case OP_ALG_AAI_GSM:
+ case OP_ALG_AAI_EDGE:
+ case OP_ALG_AAI_F8:
+ case OP_ALG_AAI_F9:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_alg_aai_snow_f9(uint16_t aai)
+{
+ if (aai == OP_ALG_AAI_F9)
+ return 0;
+
+ return -EINVAL;
+}
+
+static inline int __rta_alg_aai_snow_f8(uint16_t aai)
+{
+ if (aai == OP_ALG_AAI_F8)
+ return 0;
+
+ return -EINVAL;
+}
+
+static inline int __rta_alg_aai_zuce(uint16_t aai)
+{
+ if (aai == OP_ALG_AAI_F8)
+ return 0;
+
+ return -EINVAL;
+}
+
+static inline int __rta_alg_aai_zuca(uint16_t aai)
+{
+ if (aai == OP_ALG_AAI_F9)
+ return 0;
+
+ return -EINVAL;
+}
+
+struct alg_aai_map {
+ uint32_t cipher_algo;
+ int (*aai_func)(uint16_t);
+ uint32_t class;
+};
+
+static const struct alg_aai_map alg_table[] = {
+/*1*/ { OP_ALG_ALGSEL_AES, __rta_alg_aai_aes, OP_TYPE_CLASS1_ALG },
+ { OP_ALG_ALGSEL_DES, __rta_alg_aai_des, OP_TYPE_CLASS1_ALG },
+ { OP_ALG_ALGSEL_3DES, __rta_alg_aai_des, OP_TYPE_CLASS1_ALG },
+ { OP_ALG_ALGSEL_MD5, __rta_alg_aai_md5, OP_TYPE_CLASS2_ALG },
+ { OP_ALG_ALGSEL_SHA1, __rta_alg_aai_md5, OP_TYPE_CLASS2_ALG },
+ { OP_ALG_ALGSEL_SHA224, __rta_alg_aai_sha, OP_TYPE_CLASS2_ALG },
+ { OP_ALG_ALGSEL_SHA256, __rta_alg_aai_sha, OP_TYPE_CLASS2_ALG },
+ { OP_ALG_ALGSEL_SHA384, __rta_alg_aai_sha, OP_TYPE_CLASS2_ALG },
+ { OP_ALG_ALGSEL_SHA512, __rta_alg_aai_sha, OP_TYPE_CLASS2_ALG },
+ { OP_ALG_ALGSEL_RNG, __rta_alg_aai_rng, OP_TYPE_CLASS1_ALG },
+/*11*/ { OP_ALG_ALGSEL_CRC, __rta_alg_aai_crc, OP_TYPE_CLASS2_ALG },
+ { OP_ALG_ALGSEL_ARC4, NULL, OP_TYPE_CLASS1_ALG },
+ { OP_ALG_ALGSEL_SNOW_F8, __rta_alg_aai_snow_f8, OP_TYPE_CLASS1_ALG },
+/*14*/ { OP_ALG_ALGSEL_KASUMI, __rta_alg_aai_kasumi, OP_TYPE_CLASS1_ALG },
+ { OP_ALG_ALGSEL_SNOW_F9, __rta_alg_aai_snow_f9, OP_TYPE_CLASS2_ALG },
+ { OP_ALG_ALGSEL_ZUCE, __rta_alg_aai_zuce, OP_TYPE_CLASS1_ALG },
+/*17*/ { OP_ALG_ALGSEL_ZUCA, __rta_alg_aai_zuca, OP_TYPE_CLASS2_ALG }
+};
+
+/*
+ * Allowed OPERATION algorithms for each SEC Era.
+ * Values represent the number of entries from alg_table[] that are supported.
+ */
+static const unsigned alg_table_sz[] = {14, 15, 15, 15, 17, 17, 11, 17};
+
+static inline int rta_operation(struct program *program, uint32_t cipher_algo,
+ uint16_t aai, uint8_t algo_state,
+ int icv_checking, int enc)
+{
+ uint32_t opcode = CMD_OPERATION;
+ unsigned i, found = 0;
+ unsigned start_pc = program->current_pc;
+ int ret;
+
+ for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
+ if (alg_table[i].cipher_algo == cipher_algo) {
+ opcode |= cipher_algo | alg_table[i].class;
+ /* nothing else to verify */
+ if (alg_table[i].aai_func == NULL) {
+ found = 1;
+ break;
+ }
+
+ aai &= OP_ALG_AAI_MASK;
+
+ ret = (*alg_table[i].aai_func)(aai);
+ if (ret < 0) {
+ pr_err("OPERATION: Bad AAI Type. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+ opcode |= aai;
+ found = 1;
+ break;
+ }
+ }
+ if (!found) {
+ pr_err("OPERATION: Invalid Command. SEC Program Line: %d\n",
+ program->current_pc);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ switch (algo_state) {
+ case OP_ALG_AS_UPDATE:
+ case OP_ALG_AS_INIT:
+ case OP_ALG_AS_FINALIZE:
+ case OP_ALG_AS_INITFINAL:
+ opcode |= algo_state;
+ break;
+ default:
+ pr_err("Invalid Operation Command\n");
+ ret = -EINVAL;
+ goto err;
+ }
+
+ switch (icv_checking) {
+ case ICV_CHECK_DISABLE:
+ /*
+ * opcode |= OP_ALG_ICV_OFF;
+ * OP_ALG_ICV_OFF is 0
+ */
+ break;
+ case ICV_CHECK_ENABLE:
+ opcode |= OP_ALG_ICV_ON;
+ break;
+ default:
+ pr_err("Invalid Operation Command\n");
+ ret = -EINVAL;
+ goto err;
+ }
+
+ switch (enc) {
+ case DIR_DEC:
+ /*
+ * opcode |= OP_ALG_DECRYPT;
+ * OP_ALG_DECRYPT is 0
+ */
+ break;
+ case DIR_ENC:
+ opcode |= OP_ALG_ENCRYPT;
+ break;
+ default:
+ pr_err("Invalid Operation Command\n");
+ ret = -EINVAL;
+ goto err;
+ }
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ return ret;
+}
+
+/*
+ * OPERATION PKHA routines
+ */
+static inline int __rta_pkha_clearmem(uint32_t pkha_op)
+{
+ switch (pkha_op) {
+ case (OP_ALG_PKMODE_CLEARMEM_ALL):
+ case (OP_ALG_PKMODE_CLEARMEM_ABE):
+ case (OP_ALG_PKMODE_CLEARMEM_ABN):
+ case (OP_ALG_PKMODE_CLEARMEM_AB):
+ case (OP_ALG_PKMODE_CLEARMEM_AEN):
+ case (OP_ALG_PKMODE_CLEARMEM_AE):
+ case (OP_ALG_PKMODE_CLEARMEM_AN):
+ case (OP_ALG_PKMODE_CLEARMEM_A):
+ case (OP_ALG_PKMODE_CLEARMEM_BEN):
+ case (OP_ALG_PKMODE_CLEARMEM_BE):
+ case (OP_ALG_PKMODE_CLEARMEM_BN):
+ case (OP_ALG_PKMODE_CLEARMEM_B):
+ case (OP_ALG_PKMODE_CLEARMEM_EN):
+ case (OP_ALG_PKMODE_CLEARMEM_N):
+ case (OP_ALG_PKMODE_CLEARMEM_E):
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_pkha_mod_arithmetic(uint32_t pkha_op)
+{
+ pkha_op &= (uint32_t)~OP_ALG_PKMODE_OUT_A;
+
+ switch (pkha_op) {
+ case (OP_ALG_PKMODE_MOD_ADD):
+ case (OP_ALG_PKMODE_MOD_SUB_AB):
+ case (OP_ALG_PKMODE_MOD_SUB_BA):
+ case (OP_ALG_PKMODE_MOD_MULT):
+ case (OP_ALG_PKMODE_MOD_MULT_IM):
+ case (OP_ALG_PKMODE_MOD_MULT_IM_OM):
+ case (OP_ALG_PKMODE_MOD_EXPO):
+ case (OP_ALG_PKMODE_MOD_EXPO_TEQ):
+ case (OP_ALG_PKMODE_MOD_EXPO_IM):
+ case (OP_ALG_PKMODE_MOD_EXPO_IM_TEQ):
+ case (OP_ALG_PKMODE_MOD_REDUCT):
+ case (OP_ALG_PKMODE_MOD_INV):
+ case (OP_ALG_PKMODE_MOD_MONT_CNST):
+ case (OP_ALG_PKMODE_MOD_CRT_CNST):
+ case (OP_ALG_PKMODE_MOD_GCD):
+ case (OP_ALG_PKMODE_MOD_PRIMALITY):
+ case (OP_ALG_PKMODE_MOD_SML_EXP):
+ case (OP_ALG_PKMODE_F2M_ADD):
+ case (OP_ALG_PKMODE_F2M_MUL):
+ case (OP_ALG_PKMODE_F2M_MUL_IM):
+ case (OP_ALG_PKMODE_F2M_MUL_IM_OM):
+ case (OP_ALG_PKMODE_F2M_EXP):
+ case (OP_ALG_PKMODE_F2M_EXP_TEQ):
+ case (OP_ALG_PKMODE_F2M_AMODN):
+ case (OP_ALG_PKMODE_F2M_INV):
+ case (OP_ALG_PKMODE_F2M_R2):
+ case (OP_ALG_PKMODE_F2M_GCD):
+ case (OP_ALG_PKMODE_F2M_SML_EXP):
+ case (OP_ALG_PKMODE_ECC_F2M_ADD):
+ case (OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ):
+ case (OP_ALG_PKMODE_ECC_F2M_DBL):
+ case (OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ):
+ case (OP_ALG_PKMODE_ECC_F2M_MUL):
+ case (OP_ALG_PKMODE_ECC_F2M_MUL_TEQ):
+ case (OP_ALG_PKMODE_ECC_F2M_MUL_R2):
+ case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ):
+ case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ):
+ case (OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ):
+ case (OP_ALG_PKMODE_ECC_MOD_ADD):
+ case (OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ):
+ case (OP_ALG_PKMODE_ECC_MOD_DBL):
+ case (OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ):
+ case (OP_ALG_PKMODE_ECC_MOD_MUL):
+ case (OP_ALG_PKMODE_ECC_MOD_MUL_TEQ):
+ case (OP_ALG_PKMODE_ECC_MOD_MUL_R2):
+ case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ):
+ case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ):
+ case (OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ):
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_pkha_copymem(uint32_t pkha_op)
+{
+ switch (pkha_op) {
+ case (OP_ALG_PKMODE_COPY_NSZ_A0_B0):
+ case (OP_ALG_PKMODE_COPY_NSZ_A0_B1):
+ case (OP_ALG_PKMODE_COPY_NSZ_A0_B2):
+ case (OP_ALG_PKMODE_COPY_NSZ_A0_B3):
+ case (OP_ALG_PKMODE_COPY_NSZ_A1_B0):
+ case (OP_ALG_PKMODE_COPY_NSZ_A1_B1):
+ case (OP_ALG_PKMODE_COPY_NSZ_A1_B2):
+ case (OP_ALG_PKMODE_COPY_NSZ_A1_B3):
+ case (OP_ALG_PKMODE_COPY_NSZ_A2_B0):
+ case (OP_ALG_PKMODE_COPY_NSZ_A2_B1):
+ case (OP_ALG_PKMODE_COPY_NSZ_A2_B2):
+ case (OP_ALG_PKMODE_COPY_NSZ_A2_B3):
+ case (OP_ALG_PKMODE_COPY_NSZ_A3_B0):
+ case (OP_ALG_PKMODE_COPY_NSZ_A3_B1):
+ case (OP_ALG_PKMODE_COPY_NSZ_A3_B2):
+ case (OP_ALG_PKMODE_COPY_NSZ_A3_B3):
+ case (OP_ALG_PKMODE_COPY_NSZ_B0_A0):
+ case (OP_ALG_PKMODE_COPY_NSZ_B0_A1):
+ case (OP_ALG_PKMODE_COPY_NSZ_B0_A2):
+ case (OP_ALG_PKMODE_COPY_NSZ_B0_A3):
+ case (OP_ALG_PKMODE_COPY_NSZ_B1_A0):
+ case (OP_ALG_PKMODE_COPY_NSZ_B1_A1):
+ case (OP_ALG_PKMODE_COPY_NSZ_B1_A2):
+ case (OP_ALG_PKMODE_COPY_NSZ_B1_A3):
+ case (OP_ALG_PKMODE_COPY_NSZ_B2_A0):
+ case (OP_ALG_PKMODE_COPY_NSZ_B2_A1):
+ case (OP_ALG_PKMODE_COPY_NSZ_B2_A2):
+ case (OP_ALG_PKMODE_COPY_NSZ_B2_A3):
+ case (OP_ALG_PKMODE_COPY_NSZ_B3_A0):
+ case (OP_ALG_PKMODE_COPY_NSZ_B3_A1):
+ case (OP_ALG_PKMODE_COPY_NSZ_B3_A2):
+ case (OP_ALG_PKMODE_COPY_NSZ_B3_A3):
+ case (OP_ALG_PKMODE_COPY_NSZ_A_E):
+ case (OP_ALG_PKMODE_COPY_NSZ_A_N):
+ case (OP_ALG_PKMODE_COPY_NSZ_B_E):
+ case (OP_ALG_PKMODE_COPY_NSZ_B_N):
+ case (OP_ALG_PKMODE_COPY_NSZ_N_A):
+ case (OP_ALG_PKMODE_COPY_NSZ_N_B):
+ case (OP_ALG_PKMODE_COPY_NSZ_N_E):
+ case (OP_ALG_PKMODE_COPY_SSZ_A0_B0):
+ case (OP_ALG_PKMODE_COPY_SSZ_A0_B1):
+ case (OP_ALG_PKMODE_COPY_SSZ_A0_B2):
+ case (OP_ALG_PKMODE_COPY_SSZ_A0_B3):
+ case (OP_ALG_PKMODE_COPY_SSZ_A1_B0):
+ case (OP_ALG_PKMODE_COPY_SSZ_A1_B1):
+ case (OP_ALG_PKMODE_COPY_SSZ_A1_B2):
+ case (OP_ALG_PKMODE_COPY_SSZ_A1_B3):
+ case (OP_ALG_PKMODE_COPY_SSZ_A2_B0):
+ case (OP_ALG_PKMODE_COPY_SSZ_A2_B1):
+ case (OP_ALG_PKMODE_COPY_SSZ_A2_B2):
+ case (OP_ALG_PKMODE_COPY_SSZ_A2_B3):
+ case (OP_ALG_PKMODE_COPY_SSZ_A3_B0):
+ case (OP_ALG_PKMODE_COPY_SSZ_A3_B1):
+ case (OP_ALG_PKMODE_COPY_SSZ_A3_B2):
+ case (OP_ALG_PKMODE_COPY_SSZ_A3_B3):
+ case (OP_ALG_PKMODE_COPY_SSZ_B0_A0):
+ case (OP_ALG_PKMODE_COPY_SSZ_B0_A1):
+ case (OP_ALG_PKMODE_COPY_SSZ_B0_A2):
+ case (OP_ALG_PKMODE_COPY_SSZ_B0_A3):
+ case (OP_ALG_PKMODE_COPY_SSZ_B1_A0):
+ case (OP_ALG_PKMODE_COPY_SSZ_B1_A1):
+ case (OP_ALG_PKMODE_COPY_SSZ_B1_A2):
+ case (OP_ALG_PKMODE_COPY_SSZ_B1_A3):
+ case (OP_ALG_PKMODE_COPY_SSZ_B2_A0):
+ case (OP_ALG_PKMODE_COPY_SSZ_B2_A1):
+ case (OP_ALG_PKMODE_COPY_SSZ_B2_A2):
+ case (OP_ALG_PKMODE_COPY_SSZ_B2_A3):
+ case (OP_ALG_PKMODE_COPY_SSZ_B3_A0):
+ case (OP_ALG_PKMODE_COPY_SSZ_B3_A1):
+ case (OP_ALG_PKMODE_COPY_SSZ_B3_A2):
+ case (OP_ALG_PKMODE_COPY_SSZ_B3_A3):
+ case (OP_ALG_PKMODE_COPY_SSZ_A_E):
+ case (OP_ALG_PKMODE_COPY_SSZ_A_N):
+ case (OP_ALG_PKMODE_COPY_SSZ_B_E):
+ case (OP_ALG_PKMODE_COPY_SSZ_B_N):
+ case (OP_ALG_PKMODE_COPY_SSZ_N_A):
+ case (OP_ALG_PKMODE_COPY_SSZ_N_B):
+ case (OP_ALG_PKMODE_COPY_SSZ_N_E):
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int rta_pkha_operation(struct program *program, uint32_t op_pkha)
+{
+ uint32_t opcode = CMD_OPERATION | OP_TYPE_PK | OP_ALG_PK;
+ uint32_t pkha_func;
+ unsigned start_pc = program->current_pc;
+ int ret = -EINVAL;
+
+ pkha_func = op_pkha & OP_ALG_PK_FUN_MASK;
+
+ switch (pkha_func) {
+ case (OP_ALG_PKMODE_CLEARMEM):
+ ret = __rta_pkha_clearmem(op_pkha);
+ if (ret < 0) {
+ pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+ break;
+ case (OP_ALG_PKMODE_MOD_ADD):
+ case (OP_ALG_PKMODE_MOD_SUB_AB):
+ case (OP_ALG_PKMODE_MOD_SUB_BA):
+ case (OP_ALG_PKMODE_MOD_MULT):
+ case (OP_ALG_PKMODE_MOD_EXPO):
+ case (OP_ALG_PKMODE_MOD_REDUCT):
+ case (OP_ALG_PKMODE_MOD_INV):
+ case (OP_ALG_PKMODE_MOD_MONT_CNST):
+ case (OP_ALG_PKMODE_MOD_CRT_CNST):
+ case (OP_ALG_PKMODE_MOD_GCD):
+ case (OP_ALG_PKMODE_MOD_PRIMALITY):
+ case (OP_ALG_PKMODE_MOD_SML_EXP):
+ case (OP_ALG_PKMODE_ECC_MOD_ADD):
+ case (OP_ALG_PKMODE_ECC_MOD_DBL):
+ case (OP_ALG_PKMODE_ECC_MOD_MUL):
+ ret = __rta_pkha_mod_arithmetic(op_pkha);
+ if (ret < 0) {
+ pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+ break;
+ case (OP_ALG_PKMODE_COPY_NSZ):
+ case (OP_ALG_PKMODE_COPY_SSZ):
+ ret = __rta_pkha_copymem(op_pkha);
+ if (ret < 0) {
+ pr_err("OPERATION PKHA: Type not supported. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+ break;
+ default:
+ pr_err("Invalid Operation Command\n");
+ goto err;
+ }
+
+ opcode |= op_pkha;
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+#endif /* __RTA_OPERATION_CMD_H__ */
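rta_operation() combines both ideas: the algorithm must sit within the Era-supported prefix of alg_table[], and if it has a per-algorithm callback, that callback validates the AAI bits (a NULL callback, as for ARC4, means there is nothing to check). A standalone sketch of that lookup; the selector values, names and the single AAI mode below are invented, not the real OP_ALG_* encodings:

```c
#include <stddef.h>
#include <stdint.h>

/* Invented algorithm selectors and AAI mode (illustration only) */
#define ALG_AES  0x10u
#define ALG_ARC4 0x60u
#define AAI_CBC  0x1u

/* Toy AAI validator: accept only one mode for "AES" */
static int aai_aes(uint16_t aai)
{
	return aai == AAI_CBC ? 0 : -1;
}

struct alg_entry {
	uint32_t algo;
	int (*aai_check)(uint16_t);
};

static const struct alg_entry table[] = {
	{ ALG_AES, aai_aes },
	{ ALG_ARC4, NULL },	/* NULL checker: nothing to validate */
};

/* Mirrors the rta_operation() loop: find the algorithm within the
 * Era-supported prefix, then let its callback validate the AAI bits. */
static int check_alg(uint32_t algo, uint16_t aai, unsigned int count)
{
	unsigned int i;

	for (i = 0; i < count; i++) {
		if (table[i].algo != algo)
			continue;
		if (table[i].aai_check == NULL)
			return 0;
		return table[i].aai_check(aai);
	}
	return -1;	/* algorithm not available on this Era */
}
```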
diff --git a/drivers/crypto/caam/flib/rta/seq_in_out_ptr_cmd.h b/drivers/crypto/caam/flib/rta/seq_in_out_ptr_cmd.h
new file mode 100644
index 000000000000..ccb70a71b394
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/seq_in_out_ptr_cmd.h
@@ -0,0 +1,168 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_SEQ_IN_OUT_PTR_CMD_H__
+#define __RTA_SEQ_IN_OUT_PTR_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+/* Allowed SEQ IN PTR flags for each SEC Era. */
+static const uint32_t seq_in_ptr_flags[] = {
+ RBS | INL | SGF | PRE | EXT | RTO,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP,
+ RBS | INL | SGF | PRE | EXT | RTO | RJD | SOP
+};
+
+/* Allowed SEQ OUT PTR flags for each SEC Era. */
+static const uint32_t seq_out_ptr_flags[] = {
+ SGF | PRE | EXT,
+ SGF | PRE | EXT | RTO,
+ SGF | PRE | EXT | RTO,
+ SGF | PRE | EXT | RTO,
+ SGF | PRE | EXT | RTO | RST | EWS,
+ SGF | PRE | EXT | RTO | RST | EWS,
+ SGF | PRE | EXT | RTO | RST | EWS,
+ SGF | PRE | EXT | RTO | RST | EWS
+};
+
+static inline int rta_seq_in_ptr(struct program *program, uint64_t src,
+ uint32_t length, uint32_t flags)
+{
+ uint32_t opcode = CMD_SEQ_IN_PTR;
+ unsigned start_pc = program->current_pc;
+ int ret = -EINVAL;
+
+ /* Parameters checking */
+ if ((flags & RTO) && (flags & PRE)) {
+ pr_err("SEQ IN PTR: Invalid usage of RTO and PRE flags\n");
+ goto err;
+ }
+ if (flags & ~seq_in_ptr_flags[rta_sec_era]) {
+ pr_err("SEQ IN PTR: Flag(s) not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+ if ((flags & INL) && (flags & RJD)) {
+ pr_err("SEQ IN PTR: Invalid usage of INL and RJD flags\n");
+ goto err;
+ }
+ if ((src) && (flags & (SOP | RTO | PRE))) {
+ pr_err("SEQ IN PTR: Invalid usage of SOP, RTO or PRE flag\n");
+ goto err;
+ }
+ if ((flags & SOP) && (flags & (RBS | PRE | RTO | EXT))) {
+ pr_err("SEQ IN PTR: Invalid usage of SOP and (RBS or PRE or RTO or EXT) flags\n");
+ goto err;
+ }
+
+ /* write flag fields */
+ if (flags & RBS)
+ opcode |= SQIN_RBS;
+ if (flags & INL)
+ opcode |= SQIN_INL;
+ if (flags & SGF)
+ opcode |= SQIN_SGF;
+ if (flags & PRE)
+ opcode |= SQIN_PRE;
+ if (flags & RTO)
+ opcode |= SQIN_RTO;
+ if (flags & RJD)
+ opcode |= SQIN_RJD;
+ if (flags & SOP)
+ opcode |= SQIN_SOP;
+ if ((length >> 16) || (flags & EXT)) {
+ if (flags & SOP) {
+ pr_err("SEQ IN PTR: Invalid usage of SOP and EXT flags\n");
+ goto err;
+ }
+
+ opcode |= SQIN_EXT;
+ } else {
+ opcode |= length & SQIN_LEN_MASK;
+ }
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ /* write pointer or immediate data field */
+ if (!(opcode & (SQIN_PRE | SQIN_RTO | SQIN_SOP)))
+ __rta_out64(program, program->ps, src);
+
+ /* write extended length field */
+ if (opcode & SQIN_EXT)
+ __rta_out32(program, length);
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+static inline int rta_seq_out_ptr(struct program *program, uint64_t dst,
+ uint32_t length, uint32_t flags)
+{
+ uint32_t opcode = CMD_SEQ_OUT_PTR;
+ unsigned start_pc = program->current_pc;
+ int ret = -EINVAL;
+
+ /* Parameters checking */
+ if (flags & ~seq_out_ptr_flags[rta_sec_era]) {
+ pr_err("SEQ OUT PTR: Flag(s) not supported by SEC Era %d\n",
+ USER_SEC_ERA(rta_sec_era));
+ goto err;
+ }
+ if ((flags & RTO) && (flags & PRE)) {
+ pr_err("SEQ OUT PTR: Invalid usage of RTO and PRE flags\n");
+ goto err;
+ }
+ if ((dst) && (flags & (RTO | PRE))) {
+ pr_err("SEQ OUT PTR: Invalid usage of RTO or PRE flag\n");
+ goto err;
+ }
+ if ((flags & RST) && !(flags & RTO)) {
+ pr_err("SEQ OUT PTR: RST flag must be used with RTO flag\n");
+ goto err;
+ }
+
+ /* write flag fields */
+ if (flags & SGF)
+ opcode |= SQOUT_SGF;
+ if (flags & PRE)
+ opcode |= SQOUT_PRE;
+ if (flags & RTO)
+ opcode |= SQOUT_RTO;
+ if (flags & RST)
+ opcode |= SQOUT_RST;
+ if (flags & EWS)
+ opcode |= SQOUT_EWS;
+ if ((length >> 16) || (flags & EXT))
+ opcode |= SQOUT_EXT;
+ else
+ opcode |= length & SQOUT_LEN_MASK;
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ /* write pointer or immediate data field */
+ if (!(opcode & (SQOUT_PRE | SQOUT_RTO)))
+ __rta_out64(program, program->ps, dst);
+
+ /* write extended length field */
+ if (opcode & SQOUT_EXT)
+ __rta_out32(program, length);
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+#endif /* __RTA_SEQ_IN_OUT_PTR_CMD_H__ */
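The per-Era flag tables above drive a single mask check: any requested flag bit outside the Era's allowed set is rejected. A self-contained sketch of that check (the bit values and table below are made up for illustration; the real ones come from desc.h):

```c
#include <errno.h>
#include <stdint.h>

/* Illustrative flag bits -- NOT the real desc.h values. */
#define T_SGF (1u << 0)
#define T_PRE (1u << 1)
#define T_RTO (1u << 2)
#define T_SOP (1u << 3)

/* Allowed flags per (zero-based) Era index, like seq_in_ptr_flags[]. */
static const uint32_t toy_allowed[] = {
	T_SGF | T_PRE,                 /* oldest Era */
	T_SGF | T_PRE | T_RTO,
	T_SGF | T_PRE | T_RTO | T_SOP, /* newest Era */
};

/* Return 0 if all flags are supported by the given Era, -EINVAL otherwise. */
static int toy_check_flags(unsigned era_idx, uint32_t flags)
{
	if (flags & ~toy_allowed[era_idx])
		return -EINVAL;
	return 0;
}
```

This is how RTA can report "Flag(s) not supported by SEC Era N" at build time instead of leaving the failure to surface as a DECO error on the hardware.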
diff --git a/drivers/crypto/caam/flib/rta/signature_cmd.h b/drivers/crypto/caam/flib/rta/signature_cmd.h
new file mode 100644
index 000000000000..c6765fba8c5e
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/signature_cmd.h
@@ -0,0 +1,36 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_SIGNATURE_CMD_H__
+#define __RTA_SIGNATURE_CMD_H__
+
+static inline int rta_signature(struct program *program, uint32_t sign_type)
+{
+ uint32_t opcode = CMD_SIGNATURE;
+ unsigned start_pc = program->current_pc;
+
+ switch (sign_type) {
+ case (SIGN_TYPE_FINAL):
+ case (SIGN_TYPE_FINAL_RESTORE):
+ case (SIGN_TYPE_FINAL_NONZERO):
+ case (SIGN_TYPE_IMM_2):
+ case (SIGN_TYPE_IMM_3):
+ case (SIGN_TYPE_IMM_4):
+ opcode |= sign_type;
+ break;
+ default:
+ pr_err("SIGNATURE Command: Invalid type selection\n");
+ goto err;
+ }
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return -EINVAL;
+}
+
+#endif /* __RTA_SIGNATURE_CMD_H__ */
diff --git a/drivers/crypto/caam/flib/rta/store_cmd.h b/drivers/crypto/caam/flib/rta/store_cmd.h
new file mode 100644
index 000000000000..d3edf4077a53
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/store_cmd.h
@@ -0,0 +1,145 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_STORE_CMD_H__
+#define __RTA_STORE_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static const uint32_t store_src_table[][2] = {
+/*1*/ { KEY1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+ { KEY2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_KEYSZ_REG },
+ { DJQDA, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQDAR },
+ { MODE1, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_MODE_REG },
+ { MODE2, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_MODE_REG },
+ { DJQCTRL, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_JQCTRL },
+ { DATA1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+ { DATA2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_DATASZ_REG },
+ { DSTAT, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_STAT },
+ { ICV1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+ { ICV2SZ, LDST_CLASS_2_CCB | LDST_SRCDST_WORD_ICVSZ_REG },
+ { DPID, LDST_CLASS_DECO | LDST_SRCDST_WORD_PID },
+ { CCTRL, LDST_SRCDST_WORD_CHACTRL },
+ { ICTRL, LDST_SRCDST_WORD_IRQCTRL },
+ { CLRW, LDST_SRCDST_WORD_CLRW },
+ { MATH0, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH0 },
+ { CSTAT, LDST_SRCDST_WORD_STAT },
+ { MATH1, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH1 },
+ { MATH2, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH2 },
+ { AAD1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_DECO_AAD_SZ },
+ { MATH3, LDST_CLASS_DECO | LDST_SRCDST_WORD_DECO_MATH3 },
+ { IV1SZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_CLASS1_IV_SZ },
+ { PKASZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_A_SZ },
+ { PKBSZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_B_SZ },
+ { PKESZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_E_SZ },
+ { PKNSZ, LDST_CLASS_1_CCB | LDST_SRCDST_WORD_PKHA_N_SZ },
+ { CONTEXT1, LDST_CLASS_1_CCB | LDST_SRCDST_BYTE_CONTEXT },
+ { CONTEXT2, LDST_CLASS_2_CCB | LDST_SRCDST_BYTE_CONTEXT },
+ { DESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF },
+/*30*/ { JOBDESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_JOB },
+ { SHAREDESCBUF, LDST_CLASS_DECO | LDST_SRCDST_WORD_DESCBUF_SHARED },
+/*32*/ { JOBDESCBUF_EFF, LDST_CLASS_DECO |
+ LDST_SRCDST_WORD_DESCBUF_JOB_WE },
+ { SHAREDESCBUF_EFF, LDST_CLASS_DECO |
+ LDST_SRCDST_WORD_DESCBUF_SHARED_WE },
+/*34*/ { GTR, LDST_CLASS_DECO | LDST_SRCDST_WORD_GTR },
+ { STR, LDST_CLASS_DECO | LDST_SRCDST_WORD_STR }
+};
+
+/*
+ * Allowed STORE sources for each SEC Era.
+ * Values represent the number of entries from store_src_table[] that are
+ * supported.
+ */
+static const unsigned store_src_table_sz[] = {29, 31, 33, 33, 33, 33, 35, 35};
+
+static inline int rta_store(struct program *program, uint64_t src,
+ uint16_t offset, uint64_t dst, uint32_t length,
+ uint32_t flags)
+{
+ uint32_t opcode = 0, val;
+ int ret = -EINVAL;
+ unsigned start_pc = program->current_pc;
+
+ if (flags & SEQ)
+ opcode = CMD_SEQ_STORE;
+ else
+ opcode = CMD_STORE;
+
+ /* parameters check */
+ if ((flags & IMMED) && (flags & SGF)) {
+ pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+ if ((flags & IMMED) && (offset != 0)) {
+ pr_err("STORE: Invalid flag. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+
+ if ((flags & SEQ) && ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+ (src == JOBDESCBUF_EFF) ||
+ (src == SHAREDESCBUF_EFF))) {
+ pr_err("STORE: Invalid SRC type. SEC PC: %d; Instr: %d\n",
+ program->current_pc, program->current_instruction);
+ goto err;
+ }
+
+ if (flags & IMMED)
+ opcode |= LDST_IMM;
+
+ if ((flags & SGF) || (flags & VLF))
+ opcode |= LDST_VLF;
+
+ /*
+ * source for data to be stored can be specified as:
+ * - register location; set in src field[9-15];
+ * - if IMMED flag is set, data is set in value field [0-31];
+ * user can give this value as actual value or pointer to data
+ */
+ if (!(flags & IMMED)) {
+ ret = __rta_map_opcode((uint32_t)src, store_src_table,
+ store_src_table_sz[rta_sec_era], &val);
+ if (ret < 0) {
+ pr_err("STORE: Invalid source. SEC PC: %d; Instr: %d\n",
+ program->current_pc,
+ program->current_instruction);
+ goto err;
+ }
+ opcode |= val;
+ }
+
+ /* DESC BUFFER: length / offset values are specified in 4-byte words */
+ if ((src == DESCBUF) || (src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+ (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF)) {
+ opcode |= (length >> 2);
+ opcode |= (uint32_t)((offset >> 2) << LDST_OFFSET_SHIFT);
+ } else {
+ opcode |= length;
+ opcode |= (uint32_t)(offset << LDST_OFFSET_SHIFT);
+ }
+
+ __rta_out32(program, opcode);
+ program->current_instruction++;
+
+ if ((src == JOBDESCBUF) || (src == SHAREDESCBUF) ||
+ (src == JOBDESCBUF_EFF) || (src == SHAREDESCBUF_EFF))
+ return (int)start_pc;
+
+ /* for STORE, a pointer to where the data will be stored if needed */
+ if (!(flags & SEQ))
+ __rta_out64(program, program->ps, dst);
+
+ /* for IMMED data, place the data here */
+ if (flags & IMMED)
+ __rta_inline_data(program, src, flags & __COPY_MASK, length);
+
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+#endif /* __RTA_STORE_CMD_H__ */
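The length/offset scaling in rta_store() reflects that descriptor-buffer sources are addressed in 4-byte words, while all other sources are byte-addressed. A standalone sketch of just that encoding step (the shift value is a placeholder, standing in for LDST_OFFSET_SHIFT):

```c
#include <stdint.h>

#define TOY_OFFSET_SHIFT 8 /* illustrative, not the real LDST_OFFSET_SHIFT */

/*
 * Pack length and offset into the low opcode bits. For descriptor-buffer
 * sources, both are converted from bytes to 4-byte words first.
 */
static uint32_t toy_pack_len_off(uint32_t length, uint16_t offset,
				 int is_descbuf)
{
	uint32_t opcode = 0;

	if (is_descbuf) {
		opcode |= length >> 2;
		opcode |= (uint32_t)((offset >> 2) << TOY_OFFSET_SHIFT);
	} else {
		opcode |= length;
		opcode |= (uint32_t)(offset << TOY_OFFSET_SHIFT);
	}
	return opcode;
}
```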
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:28 UTC
Add headers defining the RTA API.

Signed-off-by: Horia Geanta <***@freescale.com>
Signed-off-by: Carmen Iorga <***@freescale.com>
---
drivers/crypto/caam/flib/rta.h | 980 ++++++++++++++++++++++++
drivers/crypto/caam/flib/rta/protocol_cmd.h | 595 ++++++++++++++
drivers/crypto/caam/flib/rta/sec_run_time_asm.h | 672 ++++++++++++++++
3 files changed, 2247 insertions(+)
create mode 100644 drivers/crypto/caam/flib/rta.h
create mode 100644 drivers/crypto/caam/flib/rta/protocol_cmd.h
create mode 100644 drivers/crypto/caam/flib/rta/sec_run_time_asm.h

diff --git a/drivers/crypto/caam/flib/rta.h b/drivers/crypto/caam/flib/rta.h
new file mode 100644
index 000000000000..831163fca3bd
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta.h
@@ -0,0 +1,980 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_RTA_H__
+#define __RTA_RTA_H__
+
+#include "rta/sec_run_time_asm.h"
+#include "rta/fifo_load_store_cmd.h"
+#include "rta/header_cmd.h"
+#include "rta/jump_cmd.h"
+#include "rta/key_cmd.h"
+#include "rta/load_cmd.h"
+#include "rta/math_cmd.h"
+#include "rta/move_cmd.h"
+#include "rta/nfifo_cmd.h"
+#include "rta/operation_cmd.h"
+#include "rta/protocol_cmd.h"
+#include "rta/seq_in_out_ptr_cmd.h"
+#include "rta/signature_cmd.h"
+#include "rta/store_cmd.h"
+
+/**
+ * DOC: About
+ *
+ * RTA (Runtime Assembler) Library is an easy and flexible runtime method for
+ * writing SEC descriptors. It implements a thin abstraction layer above the
+ * SEC command set; the resulting code is compact and similar to a
+ * descriptor sequence.
+ *
+ * RTA library improves comprehension of the SEC code, adds flexibility for
+ * writing complex descriptors and keeps the code lightweight. It should be
+ * used by anyone who needs to encode descriptors at runtime, with
+ * comprehensible flow control in the descriptor.
+ */
+
+/**
+ * DOC: Usage
+ *
+ * RTA is used in kernel space by the SEC / CAAM (Cryptographic Acceleration and
+ * Assurance Module) kernel module (drivers/crypto/caam) and SEC / CAAM QI
+ * kernel module (Freescale QorIQ SDK).
+ *
+ * RTA is used in user space by USDPAA - User Space DataPath Acceleration
+ * Architecture (Freescale QorIQ SDK).
+ */
+
+/**
+ * DOC: Descriptor Buffer Management Routines
+ *
+ * Contains details of RTA descriptor buffer management and SEC Era
+ * management routines.
+ */
+
+/**
+ * PROGRAM_CNTXT_INIT - must be called before any descriptor run-time assembly
+ * call; the program type field carries info on whether
+ * the descriptor is a shared or a job descriptor.
+ * @program: pointer to struct program
+ * @buffer: input buffer where the descriptor will be placed (uint32_t *)
+ * @offset: offset in input buffer from where the data will be written
+ * (unsigned)
+ */
+#define PROGRAM_CNTXT_INIT(program, buffer, offset) \
+ rta_program_cntxt_init(program, buffer, offset)
+
+/**
+ * PROGRAM_FINALIZE - must be called to mark completion of RTA call.
+ * @program: pointer to struct program
+ *
+ * Return: total size of the descriptor in words (unsigned).
+ */
+#define PROGRAM_FINALIZE(program) rta_program_finalize(program)
+
+/**
+ * PROGRAM_SET_36BIT_ADDR - must be called to set pointer size to 36 bits
+ * @program: pointer to struct program
+ *
+ * Return: current size of the descriptor in words (unsigned).
+ */
+#define PROGRAM_SET_36BIT_ADDR(program) rta_program_set_36bit_addr(program)
+
+/**
+ * PROGRAM_SET_BSWAP - must be called to enable byte swapping
+ * @program: pointer to struct program
+ *
+ * Byte swapping on a 4-byte boundary will be performed at the end - when
+ * calling PROGRAM_FINALIZE().
+ *
+ * Return: current size of the descriptor in words (unsigned).
+ */
+#define PROGRAM_SET_BSWAP(program) rta_program_set_bswap(program)
+
+/**
+ * WORD - must be called to insert a 32-bit value in the descriptor buffer
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint32_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned).
+ */
+#define WORD(program, val) rta_word(program, val)
+
+/**
+ * DWORD - must be called to insert a 64-bit value in the descriptor buffer
+ * @program: pointer to struct program
+ * @val: input value to be written in descriptor buffer (uint64_t)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned).
+ */
+#define DWORD(program, val) rta_dword(program, val)
+
+/**
+ * COPY_DATA - must be called to insert data larger than 64 bits in the
+ * descriptor buffer.
+ * @program: pointer to struct program
+ * @data: input data to be written in descriptor buffer (uint8_t *)
+ * @len: length of input data (unsigned)
+ *
+ * Return: the descriptor buffer offset where this command is inserted
+ * (unsigned).
+ */
+#define COPY_DATA(program, data, len) rta_copy_data(program, (data), (len))
+
+/**
+ * DESC_LEN - determines job / shared descriptor buffer length (in words)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in words (unsigned).
+ */
+#define DESC_LEN(buffer) rta_desc_len(buffer)
+
+/**
+ * DESC_BYTES - determines job / shared descriptor buffer length (in bytes)
+ * @buffer: descriptor buffer (uint32_t *)
+ *
+ * Return: descriptor buffer length in bytes (unsigned).
+ */
+#define DESC_BYTES(buffer) rta_desc_bytes(buffer)
+
+/*
+ * SEC HW block revision.
+ *
+ * This *must not be confused with SEC version*:
+ * - SEC HW block revision format is "v"
+ * - SEC revision format is "x.y"
+ */
+extern enum rta_sec_era rta_sec_era;
+
+/**
+ * rta_set_sec_era - Set SEC Era HW block revision for which the RTA library
+ * will generate the descriptors.
+ * @era: SEC Era (enum rta_sec_era)
+ *
+ * Return: 0 if the ERA was set successfully, -1 otherwise (int)
+ *
+ * Warning 1: Must be called *only once*, *before* using any other RTA API
+ * routine.
+ *
+ * Warning 2: *Not thread safe*.
+ */
+static inline int rta_set_sec_era(enum rta_sec_era era)
+{
+ if (era > MAX_SEC_ERA) {
+ rta_sec_era = DEFAULT_SEC_ERA;
+ pr_err("Unsupported SEC ERA. Defaulting to ERA %d\n",
+ DEFAULT_SEC_ERA + 1);
+ return -1;
+ }
+
+ rta_sec_era = era;
+ return 0;
+}
+
+/**
+ * rta_get_sec_era - Get SEC Era HW block revision for which the RTA library
+ * will generate the descriptors.
+ *
+ * Return: SEC Era (unsigned).
+ */
+static inline unsigned rta_get_sec_era(void)
+{
+ return rta_sec_era;
+}
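rta_set_sec_era() clamps out-of-range input back to a default rather than leaving the global Era in an undefined state. A minimal standalone model of that set/get pair (the MAX/DEFAULT values are placeholders, not the real RTA constants from sec_run_time_asm.h):

```c
/* Placeholder Era bounds -- the real values live in sec_run_time_asm.h. */
#define TOY_MAX_ERA	7 /* zero-based index of the newest supported Era */
#define TOY_DEFAULT_ERA	1

static unsigned toy_era = TOY_DEFAULT_ERA;

/*
 * Returns 0 on success; on out-of-range input, falls back to the default
 * Era and returns -1, mirroring rta_set_sec_era().
 */
static int toy_set_era(unsigned era)
{
	if (era > TOY_MAX_ERA) {
		toy_era = TOY_DEFAULT_ERA;
		return -1;
	}
	toy_era = era;
	return 0;
}

static unsigned toy_get_era(void)
{
	return toy_era;
}
```

As the warnings above note, the real global is not thread safe, so this must happen once, before any descriptor is assembled.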
+
+/**
+ * DOC: SEC Commands Routines
+ *
+ * Contains details of RTA wrapper routines over SEC engine commands.
+ */
+
+/**
+ * SHR_HDR - Configures Shared Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the shared
+ * descriptor should start (unsigned).
+ * @flags: operational flags: RIF, DNR, CIF, SC, PD
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define SHR_HDR(program, share, start_idx, flags) \
+ rta_shr_header(program, share, start_idx, flags)
+
+/**
+ * JOB_HDR - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ * descriptor should start (unsigned). In case SHR bit is present
+ * in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define JOB_HDR(program, share, start_idx, share_desc, flags) \
+ rta_job_header(program, share, start_idx, share_desc, flags, 0)
+
+/**
+ * JOB_HDR_EXT - Configures JOB Descriptor HEADER command
+ * @program: pointer to struct program
+ * @share: descriptor share state (enum rta_share_type)
+ * @start_idx: index in descriptor buffer where the execution of the job
+ * descriptor should start (unsigned). In case SHR bit is present
+ * in flags, this will be the shared descriptor length.
+ * @share_desc: pointer to shared descriptor, in case SHR bit is set (uint64_t)
+ * @flags: operational flags: RSMS, DNR, TD, MTD, REO, SHR
+ * @ext_flags: extended header flags: DSV (DECO Select Valid), DECO Id (limited
+ * by DSEL_MASK).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define JOB_HDR_EXT(program, share, start_idx, share_desc, flags, ext_flags) \
+ rta_job_header(program, share, start_idx, share_desc, flags | EXT, \
+ ext_flags)
+
+/**
+ * MOVE - Configures MOVE and MOVE_LEN commands
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ * DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ * OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ * KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ * value and IMMED flag must be set; for MOVE_LEN must be specified
+ * using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ * SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define MOVE(program, src, src_offset, dst, dst_offset, length, opt) \
+ rta_move(program, __MOVE, src, src_offset, dst, dst_offset, length, opt)
+
+/**
+ * MOVEB - Configures MOVEB command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ * DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ * OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ * KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ * value and IMMED flag must be set; for MOVE_LEN must be specified
+ * using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ * SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command if byte swapping is not enabled; otherwise,
+ * when src/dst is the descriptor buffer or the MATH registers, the data type
+ * is a byte array where MOVE's data type is a 4-byte array, and vice versa.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define MOVEB(program, src, src_offset, dst, dst_offset, length, opt) \
+ rta_move(program, __MOVEB, src, src_offset, dst, dst_offset, length, \
+ opt)
+
+/**
+ * MOVEDW - Configures MOVEDW command
+ * @program: pointer to struct program
+ * @src: internal source of data that will be moved: CONTEXT1, CONTEXT2, OFIFO,
+ * DESCBUF, MATH0-MATH3, IFIFOABD, IFIFOAB1, IFIFOAB2, AB1, AB2, ABD.
+ * @src_offset: offset in source data (uint16_t)
+ * @dst: internal destination of data that will be moved: CONTEXT1, CONTEXT2,
+ * OFIFO, DESCBUF, MATH0-MATH3, IFIFOAB1, IFIFOAB2, IFIFO, PKA, KEY1,
+ * KEY2, ALTSOURCE.
+ * @dst_offset: offset in destination data (uint16_t)
+ * @length: size of data to be moved: for MOVE must be specified as immediate
+ * value and IMMED flag must be set; for MOVE_LEN must be specified
+ * using MATH0-MATH3.
+ * @opt: operational flags: WAITCOMP, FLUSH1, FLUSH2, LAST1, LAST2, SIZE_WORD,
+ * SIZE_BYTE, SIZE_DWORD, IMMED (not valid for MOVE_LEN).
+ *
+ * Identical to the MOVE command, with the following differences: the data
+ * type is an 8-byte array; word swapping is performed when SEC is programmed
+ * in little endian mode.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define MOVEDW(program, src, src_offset, dst, dst_offset, length, opt) \
+ rta_move(program, __MOVEDW, src, src_offset, dst, dst_offset, length, \
+ opt)
+
+/**
+ * FIFOLOAD - Configures FIFOLOAD command to load message data, PKHA data, IV,
+ * ICV, AAD and bit length message data into Input Data FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ * MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @src: pointer or actual data in case of immediate load; IMMED, COPY and DCOPY
+ * flags indicate action taken (inline imm data, inline ptr, inline from
+ * ptr).
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, IMMED, EXT, CLASS1, CLASS2, BOTH, FLUSH1,
+ * LAST1, LAST2, COPY, DCOPY.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define FIFOLOAD(program, data, src, length, flags) \
+ rta_fifo_load(program, data, src, length, flags)
+
+/**
+ * SEQFIFOLOAD - Configures SEQ FIFOLOAD command to load message data, PKHA
+ * data, IV, ICV, AAD and bit length message data into Input Data
+ * FIFO.
+ * @program: pointer to struct program
+ * @data: input data type to store: PKHA registers, IFIFO, MSG1, MSG2,
+ * MSGOUTSNOOP, MSGINSNOOP, IV1, IV2, AAD1, ICV1, ICV2, BIT_DATA, SKIP.
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ * (uint32_t).
+ * @flags: operational flags: VLF, CLASS1, CLASS2, BOTH, FLUSH1, LAST1, LAST2,
+ * AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define SEQFIFOLOAD(program, data, length, flags) \
+ rta_fifo_load(program, data, NONE, length, flags|SEQ)
+
+/**
+ * FIFOSTORE - Configures FIFOSTORE command, to move data from Output Data FIFO
+ * to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ * RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define FIFOSTORE(program, data, encrypt_flags, dst, length, flags) \
+ rta_fifo_store(program, data, encrypt_flags, dst, length, flags)
+
+/**
+ * SEQFIFOSTORE - Configures SEQ FIFOSTORE command, to move data from Output
+ * Data FIFO to external memory via DMA.
+ * @program: pointer to struct program
+ * @data: output data type to store: PKHA registers, IFIFO, OFIFO, RNG,
+ * RNGOFIFO, AFHA_SBOX, MDHA_SPLIT_KEY, MSG, KEY1, KEY2, METADATA, SKIP.
+ * @encrypt_flags: store data encryption mode: EKT, TK
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ * (uint32_t).
+ * @flags: operational flags: VLF, CONT, EXT, CLASS1, CLASS2, BOTH
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define SEQFIFOSTORE(program, data, encrypt_flags, length, flags) \
+ rta_fifo_store(program, data, encrypt_flags, 0, length, flags|SEQ)
+
+/**
+ * KEY - Configures KEY and SEQ KEY commands
+ * @program: pointer to struct program
+ * @key_dst: key store location: KEY1, KEY2, PKE, AFHA_SBOX, MDHA_SPLIT_KEY
+ * @encrypt_flags: key encryption mode: ENC, EKT, TK, NWB, PTS
+ * @src: pointer or actual data in case of immediate load (uint64_t); IMMED,
+ * COPY and DCOPY flags indicate action taken (inline imm data,
+ * inline ptr, inline from ptr).
+ * @length: number of bytes to load; can be set to 0 for SEQ command w/ VLF set
+ * (uint32_t).
+ * @flags: operational flags: for KEY: SGF, IMMED, COPY, DCOPY; for SEQKEY: SEQ,
+ * VLF, AIDF.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define KEY(program, key_dst, encrypt_flags, src, length, flags) \
+ rta_key(program, key_dst, encrypt_flags, src, length, flags)
+
+/**
+ * SEQINPTR - Configures SEQ IN PTR command
+ * @program: pointer to struct program
+ * @src: starting address for Input Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Input Sequence (uint32_t)
+ * @flags: operational flags: RBS, INL, SGF, PRE, EXT, RTO, RJD, SOP (when PRE,
+ * RTO or SOP are set, @src parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define SEQINPTR(program, src, length, flags) \
+ rta_seq_in_ptr(program, src, length, flags)
+
+/**
+ * SEQOUTPTR - Configures SEQ OUT PTR command
+ * @program: pointer to struct program
+ * @dst: starting address for Output Sequence (uint64_t)
+ * @length: number of bytes in (or to be added to) Output Sequence (uint32_t)
+ * @flags: operational flags: SGF, PRE, EXT, RTO, RST, EWS (when PRE or RTO are
+ * set, @dst parameter must be 0).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define SEQOUTPTR(program, dst, length, flags) \
+ rta_seq_out_ptr(program, dst, length, flags)
+
+/**
+ * ALG_OPERATION - Configures ALGORITHM OPERATION command
+ * @program: pointer to struct program
+ * @cipher_alg: algorithm to be used
+ * @aai: Additional Algorithm Information; contains mode information that is
+ * associated with the algorithm (check desc.h for specific values).
+ * @algo_state: algorithm state; defines the state of the algorithm that is
+ * being executed (check desc.h file for specific values).
+ * @icv_check: ICV checking; selects whether the algorithm should check
+ * calculated ICV with known ICV: ICV_CHECK_ENABLE,
+ * ICV_CHECK_DISABLE.
+ * @enc: selects between encryption and decryption: DIR_ENC, DIR_DEC
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define ALG_OPERATION(program, cipher_alg, aai, algo_state, icv_check, enc) \
+ rta_operation(program, cipher_alg, aai, algo_state, icv_check, enc)
+
+/**
+ * PROTOCOL - Configures PROTOCOL OPERATION command
+ * @program: pointer to struct program
+ * @optype: operation type: OP_TYPE_UNI_PROTOCOL / OP_TYPE_DECAP_PROTOCOL /
+ * OP_TYPE_ENCAP_PROTOCOL.
+ * @protid: protocol identifier value (check desc.h file for specific values)
+ * @protoinfo: protocol dependent value (check desc.h file for specific values)
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define PROTOCOL(program, optype, protid, protoinfo) \
+ rta_proto_operation(program, optype, protid, protoinfo)
+
+/**
+ * PKHA_OPERATION - Configures PKHA OPERATION command
+ * @program: pointer to struct program
+ * @op_pkha: PKHA operation; indicates the modular arithmetic function to
+ * execute (check desc.h file for specific values).
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define PKHA_OPERATION(program, op_pkha) rta_pkha_operation(program, op_pkha)
+
+/**
+ * JUMP - Configures JUMP command
+ * @program: pointer to struct program
+ * @addr: local offset for local jumps or address pointer for non-local jumps;
+ * IMM or PTR macros must be used to indicate type.
+ * @jump_type: type of action taken by jump (enum rta_jump_type)
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: operational flags - DONE1, DONE2, BOTH; various
+ * sharing and wait conditions (JSL = 1) - NIFP, NIP, NOP, NCP, CALM,
+ * SELF, SHARED, JQP; Math and PKHA status conditions (JSL = 0) - Z, N,
+ * NV, C, PK0, PK1, PKP.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define JUMP(program, addr, jump_type, test_type, cond) \
+ rta_jump(program, addr, jump_type, test_type, cond, NONE)
+
+/**
+ * JUMP_INC - Configures JUMP_INC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ * SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define JUMP_INC(program, addr, test_type, cond, src_dst) \
+ rta_jump(program, addr, LOCAL_JUMP_INC, test_type, cond, src_dst)
+
+/**
+ * JUMP_DEC - Configures JUMP_DEC command
+ * @program: pointer to struct program
+ * @addr: local offset; IMM or PTR macros must be used to indicate type
+ * @test_type: defines how jump conditions are evaluated (enum rta_jump_cond)
+ * @cond: jump conditions: Math status conditions (JSL = 0): Z, N, NV, C
+ * @src_dst: register to increment / decrement: MATH0-MATH3, DPOVRD, SEQINSZ,
+ * SEQOUTSZ, VSEQINSZ, VSEQOUTSZ.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define JUMP_DEC(program, addr, test_type, cond, src_dst) \
+ rta_jump(program, addr, LOCAL_JUMP_DEC, test_type, cond, src_dst)
+
+/**
+ * LOAD - Configures LOAD command to load data registers from descriptor or from
+ * a memory location.
+ * @program: pointer to struct program
+ * @addr: immediate value or pointer to the data to be loaded; IMMED, COPY and
+ * DCOPY flags indicate action taken (inline imm data, inline ptr, inline
+ * from ptr).
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define LOAD(program, addr, dst, offset, length, flags) \
+ rta_load(program, addr, dst, offset, length, flags)
+
+/**
+ * SEQLOAD - Configures SEQ LOAD command to load data registers from descriptor
+ * or from a memory location.
+ * @program: pointer to struct program
+ * @dst: destination register (uint64_t)
+ * @offset: start point to write data in destination register (uint32_t)
+ * @length: number of bytes to load (uint32_t)
+ * @flags: operational flags: SGF
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define SEQLOAD(program, dst, offset, length, flags) \
+ rta_load(program, NONE, dst, offset, length, flags|SEQ)
+
+/**
+ * STORE - Configures STORE command to read data from registers and write them
+ * to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ * KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ * ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ * CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ * immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ * (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @dst: pointer to store location (uint64_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: VLF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define STORE(program, src, offset, dst, length, flags) \
+ rta_store(program, src, offset, dst, length, flags)
+
+/**
+ * SEQSTORE - Configures SEQ STORE command to read data from registers and write
+ * them to a memory location.
+ * @program: pointer to struct program
+ * @src: immediate value or source register for data to be stored: KEY1SZ,
+ * KEY2SZ, DJQDA, MODE1, MODE2, DJQCTRL, DATA1SZ, DATA2SZ, DSTAT, ICV1SZ,
+ * ICV2SZ, DPID, CCTRL, ICTRL, CLRW, CSTAT, MATH0-MATH3, PKHA registers,
+ * CONTEXT1, CONTEXT2, DESCBUF, JOBDESCBUF, SHAREDESCBUF. In case of
+ * immediate value, IMMED, COPY and DCOPY flags indicate action taken
+ * (inline imm data, inline ptr, inline from ptr).
+ * @offset: start point for reading from source register (uint16_t)
+ * @length: number of bytes to store (uint32_t)
+ * @flags: operational flags: SGF, IMMED, COPY, DCOPY
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define SEQSTORE(program, src, offset, length, flags) \
+ rta_store(program, src, offset, NONE, length, flags|SEQ)
+
+/**
+ * MATHB - Configures MATHB command to perform binary operations
+ * @program: pointer to struct program
+ * @operand1: first operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ * VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ * indicate immediate value.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ * LSHIFT, RSHIFT, SHLD.
+ * @operand2: second operand: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD,
+ * OFIFO, JOBSRC, ZERO, ONE, Immediate value. IMMED2 must be used to
+ * indicate immediate value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ * NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ * is one (int).
+ * @opt: operational flags: IFB, NFU, STL, SWP, IMMED, IMMED2
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define MATHB(program, operand1, operator, operand2, result, length, opt) \
+ rta_math(program, operand1, MATH_FUN_##operator, operand2, result, \
+ length, opt)
+
+/**
+ * MATHI - Configures MATHI command to perform binary operations
+ * @program: pointer to struct program
+ * @operand: if !SSEL: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ * VSEQOUTSZ, ZERO, ONE.
+ * if SSEL: MATH0-MATH3, DPOVRD, VSEQINSZ, VSEQOUTSZ, ABD, OFIFO,
+ * JOBSRC, ZERO, ONE.
+ * @operator: function to be performed: ADD, ADDC, SUB, SUBB, OR, AND, XOR,
+ * LSHIFT, RSHIFT, FBYT (for !SSEL only).
+ * @imm: Immediate value (uint8_t). IMMED must be used to indicate immediate
+ * value.
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ * NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ * is one (int). @imm is left-extended with zeros if needed.
+ * @opt: operational flags: NFU, SSEL, SWP, IMMED
+ *
+ * If !SSEL, @operand <@operator> @imm -> @result
+ * If SSEL, @imm <@operator> @operand -> @result
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define MATHI(program, operand, operator, imm, result, length, opt) \
+ rta_mathi(program, operand, MATH_FUN_##operator, imm, result, length, \
+ opt)
+
+/**
+ * MATHU - Configures MATHU command to perform unary operations
+ * @program: pointer to struct program
+ * @operand1: operand: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ, VSEQINSZ,
+ * VSEQOUTSZ, ZERO, ONE, NONE, Immediate value. IMMED must be used to
+ * indicate immediate value.
+ * @operator: function to be performed: ZBYT, BSWAP
+ * @result: destination for the result: MATH0-MATH3, DPOVRD, SEQINSZ, SEQOUTSZ,
+ * NONE, VSEQINSZ, VSEQOUTSZ.
+ * @length: length in bytes of the operation and the immediate value, if there
+ * is one (int).
+ * @opt: operational flags: NFU, STL, SWP, IMMED
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define MATHU(program, operand1, operator, result, length, opt) \
+ rta_math(program, operand1, MATH_FUN_##operator, NONE, result, length, \
+ opt)
+
+/**
+ * SIGNATURE - Configures SIGNATURE command
+ * @program: pointer to struct program
+ * @sign_type: signature type: SIGN_TYPE_FINAL, SIGN_TYPE_FINAL_RESTORE,
+ * SIGN_TYPE_FINAL_NONZERO, SIGN_TYPE_IMM_2, SIGN_TYPE_IMM_3,
+ * SIGN_TYPE_IMM_4.
+ *
+ * After SIGNATURE command, DWORD or WORD must be used to insert signature in
+ * descriptor buffer.
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define SIGNATURE(program, sign_type) rta_signature(program, sign_type)
+
+/**
+ * NFIFOADD - Configures NFIFO command, a shortcut of the RTA LOAD command to
+ * write to the iNfo FIFO.
+ * @program: pointer to struct program
+ * @src: source for the input data in the Alignment Block: IFIFO, OFIFO, PAD,
+ *	MSGOUTSNOOP, ALTSOURCE, OFIFO_SYNC, MSGOUTSNOOP_ALT.
+ * @data: type of data that is going through the Input Data FIFO: MSG, MSG1,
+ * MSG2, IV1, IV2, ICV1, ICV2, SAD1, AAD1, AAD2, AFHA_SBOX, SKIP,
+ * PKHA registers, AB1, AB2, ABD.
+ * @length: length of the data copied in FIFO registers (uint32_t)
+ * @flags: select options between:
+ * -operational flags: LAST1, LAST2, FLUSH1, FLUSH2, OC, BP
+ * -when PAD is selected as source: BM, PR, PS
+ *	-padding type: PAD_ZERO, PAD_NONZERO, PAD_INCREMENT, PAD_RANDOM,
+ * PAD_ZERO_N1, PAD_NONZERO_0, PAD_N1, PAD_NONZERO_N
+ *
+ * Return: On success, descriptor buffer offset where this command is inserted.
+ * On error, a negative error code; first error program counter will
+ * point to offset in descriptor buffer where the instruction should
+ * have been written.
+ */
+#define NFIFOADD(program, src, data, length, flags) \
+ rta_nfifo_load(program, src, data, length, flags)
+
+/**
+ * DOC: Self Referential Code Management Routines
+ *
+ * Contains details of RTA self-referential code routines.
+ */
+
+/**
+ * REFERENCE - initialize a variable used for storing an index inside a
+ * descriptor buffer.
+ * @ref: reference to a descriptor buffer's index where an update is required
+ *	with a value that will be known later in the program flow.
+ */
+#define REFERENCE(ref) int ref = -1
+
+/**
+ * LABEL - initialize a variable used for storing an index inside a descriptor
+ * buffer.
+ * @label: stores the value with which the REFERENCE line in the descriptor
+ *	buffer should be updated.
+ */
+#define LABEL(label) unsigned label = 0
+
+/**
+ * SET_LABEL - set a LABEL value
+ * @program: pointer to struct program
+ * @label: value that will be inserted in a line previously written in the
+ * descriptor buffer.
+ */
+#define SET_LABEL(program, label) label = rta_set_label(program)
+
+/**
+ * PATCH_JUMP - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ * value is previously retained in program flow using a reference near
+ * the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ * specified line; this value is previously obtained using SET_LABEL
+ * macro near the line that will be used as reference (unsigned). For
+ * JUMP command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_JUMP(program, line, new_ref) \
+ rta_patch_jmp(program, line, new_ref, false)
+
+/**
+ * PATCH_JUMP_NON_LOCAL - Auxiliary command to resolve referential code between
+ * two program buffers.
+ * @src_program: buffer to be updated (struct program *)
+ * @line: position in source descriptor buffer where the update will be done;
+ * this value is previously retained in program flow using a reference
+ * near the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ * specified line; this value is previously obtained using SET_LABEL
+ * macro near the line that will be used as reference (unsigned). For
+ * JUMP command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_JUMP_NON_LOCAL(src_program, line, new_ref) \
+ rta_patch_jmp(src_program, line, new_ref, true)
+
+/**
+ * PATCH_MOVE - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ * value is previously retained in program flow using a reference near
+ * the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ * specified line; this value is previously obtained using SET_LABEL
+ * macro near the line that will be used as reference (unsigned). For
+ * MOVE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_MOVE(program, line, new_ref) \
+ rta_patch_move(program, line, new_ref, false)
+
+/**
+ * PATCH_MOVE_NON_LOCAL - Auxiliary command to resolve referential code between
+ * two program buffers.
+ * @src_program: buffer to be updated (struct program *)
+ * @line: position in source descriptor buffer where the update will be done;
+ * this value is previously retained in program flow using a reference
+ * near the sequence to be modified.
+ * @new_ref: updated value that will be inserted in source descriptor buffer at
+ * the specified line; this value is previously obtained using
+ * SET_LABEL macro near the line that will be used as reference
+ * (unsigned). For MOVE command, the value represents the offset
+ * field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_MOVE_NON_LOCAL(src_program, line, new_ref) \
+ rta_patch_move(src_program, line, new_ref, true)
+
+/**
+ * PATCH_LOAD - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ * value is previously retained in program flow using a reference near
+ * the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ * specified line; this value is previously obtained using SET_LABEL
+ * macro near the line that will be used as reference (unsigned). For
+ * LOAD command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_LOAD(program, line, new_ref) \
+ rta_patch_load(program, line, new_ref)
+
+/**
+ * PATCH_STORE - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ * value is previously retained in program flow using a reference near
+ * the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ * specified line; this value is previously obtained using SET_LABEL
+ * macro near the line that will be used as reference (unsigned). For
+ * STORE command, the value represents the offset field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_STORE(program, line, new_ref) \
+ rta_patch_store(program, line, new_ref, false)
+
+/**
+ * PATCH_STORE_NON_LOCAL - Auxiliary command to resolve referential code between
+ * two program buffers.
+ * @src_program: buffer to be updated (struct program *)
+ * @line: position in source descriptor buffer where the update will be done;
+ * this value is previously retained in program flow using a reference
+ * near the sequence to be modified.
+ * @new_ref: updated value that will be inserted in source descriptor buffer at
+ * the specified line; this value is previously obtained using
+ * SET_LABEL macro near the line that will be used as reference
+ * (unsigned). For STORE command, the value represents the offset
+ * field (in words).
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_STORE_NON_LOCAL(src_program, line, new_ref) \
+ rta_patch_store(src_program, line, new_ref, true)
+
+/**
+ * PATCH_HDR - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ * value is previously retained in program flow using a reference near
+ * the sequence to be modified.
+ * @new_ref: updated value that will be inserted in descriptor buffer at the
+ * specified line; this value is previously obtained using SET_LABEL
+ * macro near the line that will be used as reference (unsigned). For
+ * HEADER command, the value represents the start index field.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_HDR(program, line, new_ref) \
+ rta_patch_header(program, line, new_ref, false)
+
+/**
+ * PATCH_HDR_NON_LOCAL - Auxiliary command to resolve referential code between
+ * two program buffers.
+ * @src_program: buffer to be updated (struct program *)
+ * @line: position in source descriptor buffer where the update will be done;
+ * this value is previously retained in program flow using a reference
+ * near the sequence to be modified.
+ * @new_ref: updated value that will be inserted in source descriptor buffer at
+ * the specified line; this value is previously obtained using
+ * SET_LABEL macro near the line that will be used as reference
+ * (unsigned). For HEADER command, the value represents the start
+ * index field.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_HDR_NON_LOCAL(src_program, line, new_ref) \
+ rta_patch_header(src_program, line, new_ref, true)
+
+/**
+ * PATCH_RAW - Auxiliary command to resolve self-referential code
+ * @program: buffer to be updated (struct program *)
+ * @line: position in descriptor buffer where the update will be done; this
+ * value is previously retained in program flow using a reference near
+ * the sequence to be modified.
+ * @mask: mask to be used for applying the new value (unsigned). The mask
+ * selects which bits from the provided @new_val are taken into
+ * consideration when overwriting the existing value.
+ * @new_val: updated value that will be masked using the provided mask value
+ * and inserted in descriptor buffer at the specified line.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_RAW(program, line, mask, new_val) \
+ rta_patch_raw(program, line, mask, new_val, false)
+
+/**
+ * PATCH_RAW_NON_LOCAL - Auxiliary command to resolve referential code between
+ * two program buffers.
+ * @src_program: buffer to be updated (struct program *)
+ * @line: position in source descriptor buffer where the update will be done;
+ * this value is previously retained in program flow using a reference
+ * near the sequence to be modified.
+ * @mask: mask to be used for applying the new value (unsigned). The mask
+ * selects which bits from the provided @new_val are taken into
+ * consideration when overwriting the existing value.
+ * @new_val: updated value that will be masked using the provided mask value
+ * and inserted in descriptor buffer at the specified line.
+ *
+ * Return: 0 in case of success, a negative error code if it fails
+ */
+#define PATCH_RAW_NON_LOCAL(src_program, line, mask, new_val) \
+ rta_patch_raw(src_program, line, mask, new_val, true)
+
+#endif /* __RTA_RTA_H__ */
diff --git a/drivers/crypto/caam/flib/rta/protocol_cmd.h b/drivers/crypto/caam/flib/rta/protocol_cmd.h
new file mode 100644
index 000000000000..38f544adde74
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/protocol_cmd.h
@@ -0,0 +1,595 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_PROTOCOL_CMD_H__
+#define __RTA_PROTOCOL_CMD_H__
+
+extern enum rta_sec_era rta_sec_era;
+
+static inline int __rta_ssl_proto(uint16_t protoinfo)
+{
+ switch (protoinfo) {
+ case OP_PCL_SSL30_RC4_40_MD5_2:
+ case OP_PCL_SSL30_RC4_128_MD5_2:
+ case OP_PCL_SSL30_RC4_128_SHA_5:
+ case OP_PCL_SSL30_RC4_40_MD5_3:
+ case OP_PCL_SSL30_RC4_128_MD5_3:
+ case OP_PCL_SSL30_RC4_128_SHA:
+ case OP_PCL_SSL30_RC4_128_MD5:
+ case OP_PCL_SSL30_RC4_40_SHA:
+ case OP_PCL_SSL30_RC4_40_MD5:
+ case OP_PCL_SSL30_RC4_128_SHA_2:
+ case OP_PCL_SSL30_RC4_128_SHA_3:
+ case OP_PCL_SSL30_RC4_128_SHA_4:
+ case OP_PCL_SSL30_RC4_128_SHA_6:
+ case OP_PCL_SSL30_RC4_128_SHA_7:
+ case OP_PCL_SSL30_RC4_128_SHA_8:
+ case OP_PCL_SSL30_RC4_128_SHA_9:
+ case OP_PCL_SSL30_RC4_128_SHA_10:
+ case OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA:
+ if (rta_sec_era == RTA_SEC_ERA_7)
+ return -EINVAL;
+ /* fall through if not Era 7 */
+ case OP_PCL_SSL30_DES40_CBC_SHA:
+ case OP_PCL_SSL30_DES_CBC_SHA_2:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_5:
+ case OP_PCL_SSL30_DES40_CBC_SHA_2:
+ case OP_PCL_SSL30_DES_CBC_SHA_3:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_6:
+ case OP_PCL_SSL30_DES40_CBC_SHA_3:
+ case OP_PCL_SSL30_DES_CBC_SHA_4:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_7:
+ case OP_PCL_SSL30_DES40_CBC_SHA_4:
+ case OP_PCL_SSL30_DES_CBC_SHA_5:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_8:
+ case OP_PCL_SSL30_DES40_CBC_SHA_5:
+ case OP_PCL_SSL30_DES_CBC_SHA_6:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_9:
+ case OP_PCL_SSL30_DES40_CBC_SHA_6:
+ case OP_PCL_SSL30_DES_CBC_SHA_7:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_10:
+ case OP_PCL_SSL30_DES_CBC_SHA:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA:
+ case OP_PCL_SSL30_DES_CBC_MD5:
+ case OP_PCL_SSL30_3DES_EDE_CBC_MD5:
+ case OP_PCL_SSL30_DES40_CBC_SHA_7:
+ case OP_PCL_SSL30_DES40_CBC_MD5:
+ case OP_PCL_SSL30_AES_128_CBC_SHA:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_2:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_3:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_4:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_5:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_6:
+ case OP_PCL_SSL30_AES_256_CBC_SHA:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_2:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_3:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_4:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_5:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_6:
+ case OP_PCL_TLS12_AES_128_CBC_SHA256_2:
+ case OP_PCL_TLS12_AES_128_CBC_SHA256_3:
+ case OP_PCL_TLS12_AES_128_CBC_SHA256_4:
+ case OP_PCL_TLS12_AES_128_CBC_SHA256_5:
+ case OP_PCL_TLS12_AES_256_CBC_SHA256_2:
+ case OP_PCL_TLS12_AES_256_CBC_SHA256_3:
+ case OP_PCL_TLS12_AES_256_CBC_SHA256_4:
+ case OP_PCL_TLS12_AES_256_CBC_SHA256_5:
+ case OP_PCL_TLS12_AES_128_CBC_SHA256_6:
+ case OP_PCL_TLS12_AES_256_CBC_SHA256_6:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_2:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_7:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_7:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_3:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_8:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_8:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_4:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_9:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_9:
+ case OP_PCL_SSL30_AES_128_GCM_SHA256_1:
+ case OP_PCL_SSL30_AES_256_GCM_SHA384_1:
+ case OP_PCL_SSL30_AES_128_GCM_SHA256_2:
+ case OP_PCL_SSL30_AES_256_GCM_SHA384_2:
+ case OP_PCL_SSL30_AES_128_GCM_SHA256_3:
+ case OP_PCL_SSL30_AES_256_GCM_SHA384_3:
+ case OP_PCL_SSL30_AES_128_GCM_SHA256_4:
+ case OP_PCL_SSL30_AES_256_GCM_SHA384_4:
+ case OP_PCL_SSL30_AES_128_GCM_SHA256_5:
+ case OP_PCL_SSL30_AES_256_GCM_SHA384_5:
+ case OP_PCL_SSL30_AES_128_GCM_SHA256_6:
+ case OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384:
+ case OP_PCL_TLS_PSK_AES_128_GCM_SHA256:
+ case OP_PCL_TLS_PSK_AES_256_GCM_SHA384:
+ case OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256:
+ case OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384:
+ case OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256:
+ case OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384:
+ case OP_PCL_TLS_PSK_AES_128_CBC_SHA256:
+ case OP_PCL_TLS_PSK_AES_256_CBC_SHA384:
+ case OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256:
+ case OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384:
+ case OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256:
+ case OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_11:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_10:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_10:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_12:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_11:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_11:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_12:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_13:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_12:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_14:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_13:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_13:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_15:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_14:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_14:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_16:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_17:
+ case OP_PCL_SSL30_3DES_EDE_CBC_SHA_18:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_15:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_16:
+ case OP_PCL_SSL30_AES_128_CBC_SHA_17:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_15:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_16:
+ case OP_PCL_SSL30_AES_256_CBC_SHA_17:
+ case OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256:
+ case OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384:
+ case OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256:
+ case OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384:
+ case OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256:
+ case OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384:
+ case OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256:
+ case OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384:
+ case OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256:
+ case OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384:
+ case OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256:
+ case OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384:
+ case OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256:
+ case OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384:
+ case OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256:
+ case OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384:
+ case OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA:
+ case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA:
+ case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA:
+ case OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256:
+ case OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384:
+ case OP_PCL_TLS12_3DES_EDE_CBC_MD5:
+ case OP_PCL_TLS12_3DES_EDE_CBC_SHA160:
+ case OP_PCL_TLS12_3DES_EDE_CBC_SHA224:
+ case OP_PCL_TLS12_3DES_EDE_CBC_SHA256:
+ case OP_PCL_TLS12_3DES_EDE_CBC_SHA384:
+ case OP_PCL_TLS12_3DES_EDE_CBC_SHA512:
+ case OP_PCL_TLS12_AES_128_CBC_SHA160:
+ case OP_PCL_TLS12_AES_128_CBC_SHA224:
+ case OP_PCL_TLS12_AES_128_CBC_SHA256:
+ case OP_PCL_TLS12_AES_128_CBC_SHA384:
+ case OP_PCL_TLS12_AES_128_CBC_SHA512:
+ case OP_PCL_TLS12_AES_192_CBC_SHA160:
+ case OP_PCL_TLS12_AES_192_CBC_SHA224:
+ case OP_PCL_TLS12_AES_192_CBC_SHA256:
+ case OP_PCL_TLS12_AES_192_CBC_SHA512:
+ case OP_PCL_TLS12_AES_256_CBC_SHA160:
+ case OP_PCL_TLS12_AES_256_CBC_SHA224:
+ case OP_PCL_TLS12_AES_256_CBC_SHA256:
+ case OP_PCL_TLS12_AES_256_CBC_SHA384:
+ case OP_PCL_TLS12_AES_256_CBC_SHA512:
+ case OP_PCL_TLS_PVT_AES_192_CBC_SHA160:
+ case OP_PCL_TLS_PVT_AES_192_CBC_SHA384:
+ case OP_PCL_TLS_PVT_AES_192_CBC_SHA224:
+ case OP_PCL_TLS_PVT_AES_192_CBC_SHA512:
+ case OP_PCL_TLS_PVT_AES_192_CBC_SHA256:
+ case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE:
+ case OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_ike_proto(uint16_t protoinfo)
+{
+ switch (protoinfo) {
+ case OP_PCL_IKE_HMAC_MD5:
+ case OP_PCL_IKE_HMAC_SHA1:
+ case OP_PCL_IKE_HMAC_AES128_CBC:
+ case OP_PCL_IKE_HMAC_SHA256:
+ case OP_PCL_IKE_HMAC_SHA384:
+ case OP_PCL_IKE_HMAC_SHA512:
+ case OP_PCL_IKE_HMAC_AES128_CMAC:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_ipsec_proto(uint16_t protoinfo)
+{
+ uint16_t proto_cls1 = protoinfo & OP_PCL_IPSEC_CIPHER_MASK;
+ uint16_t proto_cls2 = protoinfo & OP_PCL_IPSEC_AUTH_MASK;
+
+ switch (proto_cls1) {
+ case OP_PCL_IPSEC_NULL:
+ case OP_PCL_IPSEC_AES_NULL_WITH_GMAC:
+ if (rta_sec_era < RTA_SEC_ERA_2)
+ return -EINVAL;
+ break;
+ case OP_PCL_IPSEC_AES_CCM8:
+ case OP_PCL_IPSEC_AES_CCM12:
+ case OP_PCL_IPSEC_AES_CCM16:
+ case OP_PCL_IPSEC_AES_GCM8:
+ case OP_PCL_IPSEC_AES_GCM12:
+ if (proto_cls2 == OP_PCL_IPSEC_HMAC_NULL)
+ return 0;
+ /* no break */
+ case OP_PCL_IPSEC_DES_IV64:
+ case OP_PCL_IPSEC_DES:
+ case OP_PCL_IPSEC_3DES:
+ case OP_PCL_IPSEC_AES_CBC:
+ case OP_PCL_IPSEC_AES_CTR:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (proto_cls2) {
+ case OP_PCL_IPSEC_HMAC_MD5_96:
+ case OP_PCL_IPSEC_HMAC_SHA1_96:
+ case OP_PCL_IPSEC_AES_XCBC_MAC_96:
+ case OP_PCL_IPSEC_HMAC_MD5_128:
+ case OP_PCL_IPSEC_HMAC_SHA1_160:
+ case OP_PCL_IPSEC_AES_CMAC_96:
+ case OP_PCL_IPSEC_HMAC_SHA2_256_128:
+ case OP_PCL_IPSEC_HMAC_SHA2_384_192:
+ case OP_PCL_IPSEC_HMAC_SHA2_512_256:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_srtp_proto(uint16_t protoinfo)
+{
+ uint16_t proto_cls1 = protoinfo & OP_PCL_SRTP_CIPHER_MASK;
+ uint16_t proto_cls2 = protoinfo & OP_PCL_SRTP_AUTH_MASK;
+
+ switch (proto_cls1) {
+ case OP_PCL_SRTP_AES_CTR:
+ switch (proto_cls2) {
+ case OP_PCL_SRTP_HMAC_SHA1_160:
+ return 0;
+ }
+ /* no break */
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_macsec_proto(uint16_t protoinfo)
+{
+ switch (protoinfo) {
+ case OP_PCL_MACSEC:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_wifi_proto(uint16_t protoinfo)
+{
+ switch (protoinfo) {
+ case OP_PCL_WIFI:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_wimax_proto(uint16_t protoinfo)
+{
+ switch (protoinfo) {
+ case OP_PCL_WIMAX_OFDM:
+ case OP_PCL_WIMAX_OFDMA:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+/* Allowed blob proto flags for each SEC Era */
+static const uint32_t proto_blob_flags[] = {
+ OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK,
+ OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+ OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+ OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+ OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK,
+ OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+ OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM,
+ OP_PCL_BLOB_FORMAT_MASK | OP_PCL_BLOB_BLACK | OP_PCL_BLOB_TKEK |
+ OP_PCL_BLOB_EKT | OP_PCL_BLOB_REG_MASK | OP_PCL_BLOB_SEC_MEM
+};
+
+static inline int __rta_blob_proto(uint16_t protoinfo)
+{
+ if (protoinfo & ~proto_blob_flags[rta_sec_era])
+ return -EINVAL;
+
+ switch (protoinfo & OP_PCL_BLOB_FORMAT_MASK) {
+ case OP_PCL_BLOB_FORMAT_NORMAL:
+ case OP_PCL_BLOB_FORMAT_MASTER_VER:
+ case OP_PCL_BLOB_FORMAT_TEST:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (protoinfo & OP_PCL_BLOB_REG_MASK) {
+ case OP_PCL_BLOB_AFHA_SBOX:
+ if (rta_sec_era < RTA_SEC_ERA_3)
+ return -EINVAL;
+ /* no break */
+ case OP_PCL_BLOB_REG_MEMORY:
+ case OP_PCL_BLOB_REG_KEY1:
+ case OP_PCL_BLOB_REG_KEY2:
+ case OP_PCL_BLOB_REG_SPLIT:
+ case OP_PCL_BLOB_REG_PKE:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_dlc_proto(uint16_t protoinfo)
+{
+ if ((rta_sec_era < RTA_SEC_ERA_2) &&
+ (protoinfo & (OP_PCL_PKPROT_DSA_MSG | OP_PCL_PKPROT_HASH_MASK |
+ OP_PCL_PKPROT_EKT_Z | OP_PCL_PKPROT_DECRYPT_Z |
+ OP_PCL_PKPROT_DECRYPT_PRI)))
+ return -EINVAL;
+
+ switch (protoinfo & OP_PCL_PKPROT_HASH_MASK) {
+ case OP_PCL_PKPROT_HASH_MD5:
+ case OP_PCL_PKPROT_HASH_SHA1:
+ case OP_PCL_PKPROT_HASH_SHA224:
+ case OP_PCL_PKPROT_HASH_SHA256:
+ case OP_PCL_PKPROT_HASH_SHA384:
+ case OP_PCL_PKPROT_HASH_SHA512:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline int __rta_rsa_enc_proto(uint16_t protoinfo)
+{
+ switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+ case OP_PCL_RSAPROT_OP_ENC_F_IN:
+ if ((protoinfo & OP_PCL_RSAPROT_FFF_MASK) !=
+ OP_PCL_RSAPROT_FFF_RED)
+ return -EINVAL;
+ break;
+ case OP_PCL_RSAPROT_OP_ENC_F_OUT:
+ switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+ case OP_PCL_RSAPROT_FFF_RED:
+ case OP_PCL_RSAPROT_FFF_ENC:
+ case OP_PCL_RSAPROT_FFF_EKT:
+ case OP_PCL_RSAPROT_FFF_TK_ENC:
+ case OP_PCL_RSAPROT_FFF_TK_EKT:
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline int __rta_rsa_dec_proto(uint16_t protoinfo)
+{
+ switch (protoinfo & OP_PCL_RSAPROT_OP_MASK) {
+ case OP_PCL_RSAPROT_OP_DEC_ND:
+ case OP_PCL_RSAPROT_OP_DEC_PQD:
+ case OP_PCL_RSAPROT_OP_DEC_PQDPDQC:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (protoinfo & OP_PCL_RSAPROT_PPP_MASK) {
+ case OP_PCL_RSAPROT_PPP_RED:
+ case OP_PCL_RSAPROT_PPP_ENC:
+ case OP_PCL_RSAPROT_PPP_EKT:
+ case OP_PCL_RSAPROT_PPP_TK_ENC:
+ case OP_PCL_RSAPROT_PPP_TK_EKT:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (protoinfo & OP_PCL_RSAPROT_FMT_PKCSV15)
+ switch (protoinfo & OP_PCL_RSAPROT_FFF_MASK) {
+ case OP_PCL_RSAPROT_FFF_RED:
+ case OP_PCL_RSAPROT_FFF_ENC:
+ case OP_PCL_RSAPROT_FFF_EKT:
+ case OP_PCL_RSAPROT_FFF_TK_ENC:
+ case OP_PCL_RSAPROT_FFF_TK_EKT:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline int __rta_3g_dcrc_proto(uint16_t protoinfo)
+{
+ if (rta_sec_era == RTA_SEC_ERA_7)
+ return -EINVAL;
+
+ switch (protoinfo) {
+ case OP_PCL_3G_DCRC_CRC7:
+ case OP_PCL_3G_DCRC_CRC11:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_3g_rlc_proto(uint16_t protoinfo)
+{
+ if (rta_sec_era == RTA_SEC_ERA_7)
+ return -EINVAL;
+
+ switch (protoinfo) {
+ case OP_PCL_3G_RLC_NULL:
+ case OP_PCL_3G_RLC_KASUMI:
+ case OP_PCL_3G_RLC_SNOW:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_lte_pdcp_proto(uint16_t protoinfo)
+{
+ if (rta_sec_era == RTA_SEC_ERA_7)
+ return -EINVAL;
+
+ switch (protoinfo) {
+ case OP_PCL_LTE_ZUC:
+ if (rta_sec_era < RTA_SEC_ERA_5)
+ break;
+ case OP_PCL_LTE_NULL:
+ case OP_PCL_LTE_SNOW:
+ case OP_PCL_LTE_AES:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline int __rta_lte_pdcp_mixed_proto(uint16_t protoinfo)
+{
+ switch (protoinfo & OP_PCL_LTE_MIXED_AUTH_MASK) {
+ case OP_PCL_LTE_MIXED_AUTH_NULL:
+ case OP_PCL_LTE_MIXED_AUTH_SNOW:
+ case OP_PCL_LTE_MIXED_AUTH_AES:
+ case OP_PCL_LTE_MIXED_AUTH_ZUC:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (protoinfo & OP_PCL_LTE_MIXED_ENC_MASK) {
+ case OP_PCL_LTE_MIXED_ENC_NULL:
+ case OP_PCL_LTE_MIXED_ENC_SNOW:
+ case OP_PCL_LTE_MIXED_ENC_AES:
+ case OP_PCL_LTE_MIXED_ENC_ZUC:
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+struct proto_map {
+ uint32_t optype;
+ uint32_t protid;
+ int (*protoinfo_func)(uint16_t);
+};
+
+static const struct proto_map proto_table[] = {
+/*1*/ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_SSL30_PRF, __rta_ssl_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_TLS10_PRF, __rta_ssl_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_TLS11_PRF, __rta_ssl_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_TLS12_PRF, __rta_ssl_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_DTLS10_PRF, __rta_ssl_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_IKEV1_PRF, __rta_ike_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_IKEV2_PRF, __rta_ike_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_DSASIGN, __rta_dlc_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_DSAVERIFY, __rta_dlc_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC, __rta_ipsec_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SRTP, __rta_srtp_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_SSL30, __rta_ssl_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS10, __rta_ssl_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS11, __rta_ssl_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_TLS12, __rta_ssl_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DTLS10, __rta_ssl_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_MACSEC, __rta_macsec_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIFI, __rta_wifi_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_WIMAX, __rta_wimax_proto},
+/*21*/ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_BLOB, __rta_blob_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_DIFFIEHELLMAN, __rta_dlc_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_RSAENCRYPT, __rta_rsa_enc_proto},
+ {OP_TYPE_UNI_PROTOCOL, OP_PCLID_RSADECRYPT, __rta_rsa_dec_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_DCRC, __rta_3g_dcrc_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_PDU, __rta_3g_rlc_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_3G_RLC_SDU, __rta_3g_rlc_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_USER, __rta_lte_pdcp_proto},
+/*29*/ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL, __rta_lte_pdcp_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_PUBLICKEYPAIR, __rta_dlc_proto},
+/*31*/ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_DSASIGN, __rta_dlc_proto},
+/*32*/ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_LTE_PDCP_CTRL_MIXED,
+ __rta_lte_pdcp_mixed_proto},
+ {OP_TYPE_DECAP_PROTOCOL, OP_PCLID_IPSEC_NEW, __rta_ipsec_proto},
+};
+
+/*
+ * Allowed OPERATION protocols for each SEC Era.
+ * Values represent the number of entries from proto_table[] that are supported.
+ */
+static const unsigned proto_table_sz[] = {21, 29, 29, 29, 29, 29, 31, 33};
+
+static inline int rta_proto_operation(struct program *program, uint32_t optype,
+ uint32_t protid, uint16_t protoinfo)
+{
+ uint32_t opcode = CMD_OPERATION;
+ unsigned i, found = 0;
+ uint32_t optype_tmp = optype;
+ unsigned start_pc = program->current_pc;
+ int ret = -EINVAL;
+
+ for (i = 0; i < proto_table_sz[rta_sec_era]; i++) {
+		/* clear last bit in optype so encap also matches decap protocols */
+ optype_tmp &= (uint32_t)~(1 << OP_TYPE_SHIFT);
+ if (optype_tmp == proto_table[i].optype) {
+ if (proto_table[i].protid == protid) {
+ /* nothing else to verify */
+ if (proto_table[i].protoinfo_func == NULL) {
+ found = 1;
+ break;
+ }
+ /* check protoinfo */
+ ret = (*proto_table[i].protoinfo_func)
+ (protoinfo);
+ if (ret < 0) {
+ pr_err("PROTO_DESC: Bad PROTO Type. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+ found = 1;
+ break;
+ }
+ }
+ }
+ if (!found) {
+ pr_err("PROTO_DESC: Operation Type Mismatch. SEC Program Line: %d\n",
+ program->current_pc);
+ goto err;
+ }
+
+ __rta_out32(program, opcode | optype | protid | protoinfo);
+ program->current_instruction++;
+ return (int)start_pc;
+
+ err:
+ program->first_error_pc = start_pc;
+ program->current_instruction++;
+ return ret;
+}
+
+#endif /* __RTA_PROTOCOL_CMD_H__ */
diff --git a/drivers/crypto/caam/flib/rta/sec_run_time_asm.h b/drivers/crypto/caam/flib/rta/sec_run_time_asm.h
new file mode 100644
index 000000000000..d2870ddd922f
--- /dev/null
+++ b/drivers/crypto/caam/flib/rta/sec_run_time_asm.h
@@ -0,0 +1,672 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __RTA_SEC_RUN_TIME_ASM_H__
+#define __RTA_SEC_RUN_TIME_ASM_H__
+
+#include "flib/desc.h"
+
+/* flib/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "flib/compat.h"
+#endif
+
+/**
+ * enum rta_sec_era - SEC HW block revisions supported by the RTA library
+ * @RTA_SEC_ERA_1: SEC Era 1
+ * @RTA_SEC_ERA_2: SEC Era 2
+ * @RTA_SEC_ERA_3: SEC Era 3
+ * @RTA_SEC_ERA_4: SEC Era 4
+ * @RTA_SEC_ERA_5: SEC Era 5
+ * @RTA_SEC_ERA_6: SEC Era 6
+ * @RTA_SEC_ERA_7: SEC Era 7
+ * @RTA_SEC_ERA_8: SEC Era 8
+ * @MAX_SEC_ERA: maximum SEC HW block revision supported by RTA library
+ */
+enum rta_sec_era {
+ RTA_SEC_ERA_1,
+ RTA_SEC_ERA_2,
+ RTA_SEC_ERA_3,
+ RTA_SEC_ERA_4,
+ RTA_SEC_ERA_5,
+ RTA_SEC_ERA_6,
+ RTA_SEC_ERA_7,
+ RTA_SEC_ERA_8,
+ MAX_SEC_ERA = RTA_SEC_ERA_8
+};
+
+/**
+ * DEFAULT_SEC_ERA - the default value for the SEC era in case the user provides
+ * an unsupported value.
+ */
+#define DEFAULT_SEC_ERA MAX_SEC_ERA
+
+/**
+ * USER_SEC_ERA - translates the SEC Era from internal to user representation.
+ * @sec_era: SEC Era in internal (library) representation
+ */
+#define USER_SEC_ERA(sec_era) (sec_era + 1)
+
+/**
+ * INTL_SEC_ERA - translates the SEC Era from user representation to internal.
+ * @sec_era: SEC Era in user representation
+ */
+#define INTL_SEC_ERA(sec_era) (sec_era - 1)
+
+/**
+ * enum rta_jump_type - Types of action taken by JUMP command
+ * @LOCAL_JUMP: conditional jump to an offset within the descriptor buffer
+ * @FAR_JUMP: conditional jump to a location outside the descriptor buffer,
+ * indicated by the POINTER field after the JUMP command.
+ * @HALT: conditional halt - stops the execution of the current descriptor and
+ *        writes the PKHA / Math condition bits as status / error code.
+ * @HALT_STATUS: conditional halt with user-specified status - stops the
+ *               execution of the current descriptor and writes the value of
+ *               the "LOCAL OFFSET" JUMP field as status / error code.
+ * @GOSUB: conditional subroutine call - similar to @LOCAL_JUMP, but also saves
+ * return address in the Return Address register; subroutine calls
+ * cannot be nested.
+ * @RETURN: conditional subroutine return - similar to @LOCAL_JUMP, but the
+ * offset is taken from the Return Address register.
+ * @LOCAL_JUMP_INC: similar to @LOCAL_JUMP, but increment the register specified
+ * in "SRC_DST" JUMP field before evaluating the jump
+ * condition.
+ * @LOCAL_JUMP_DEC: similar to @LOCAL_JUMP, but decrement the register specified
+ * in "SRC_DST" JUMP field before evaluating the jump
+ * condition.
+ */
+enum rta_jump_type {
+ LOCAL_JUMP,
+ FAR_JUMP,
+ HALT,
+ HALT_STATUS,
+ GOSUB,
+ RETURN,
+ LOCAL_JUMP_INC,
+ LOCAL_JUMP_DEC
+};
+
+/**
+ * enum rta_jump_cond - How test conditions are evaluated by JUMP command
+ * @ALL_TRUE: perform action if ALL selected conditions are true
+ * @ALL_FALSE: perform action if ALL selected conditions are false
+ * @ANY_TRUE: perform action if ANY of the selected conditions is true
+ * @ANY_FALSE: perform action if ANY of the selected conditions is false
+ */
+enum rta_jump_cond {
+ ALL_TRUE,
+ ALL_FALSE,
+ ANY_TRUE,
+ ANY_FALSE
+};
+
+/**
+ * enum rta_share_type - Types of sharing for JOB_HDR and SHR_HDR commands
+ * @SHR_NEVER: nothing is shared; descriptors can execute in parallel (i.e. no
+ * dependencies are allowed between them).
+ * @SHR_WAIT: shared descriptor and keys are shared once the descriptor sets
+ * "OK to share" in DECO Control Register (DCTRL).
+ * @SHR_SERIAL: shared descriptor and keys are shared once the descriptor has
+ * completed.
+ * @SHR_ALWAYS: shared descriptor is shared anytime after the descriptor is
+ * loaded.
+ * @SHR_DEFER: valid only for JOB_HDR; sharing type is the one specified
+ * in the shared descriptor associated with the job descriptor.
+ */
+enum rta_share_type {
+ SHR_NEVER,
+ SHR_WAIT,
+ SHR_SERIAL,
+ SHR_ALWAYS,
+ SHR_DEFER
+};
+
+/* Registers definitions */
+enum rta_regs {
+ /* CCB Registers */
+ CONTEXT1 = 1,
+ CONTEXT2,
+ KEY1,
+ KEY2,
+ KEY1SZ,
+ KEY2SZ,
+ ICV1SZ,
+ ICV2SZ,
+ DATA1SZ,
+ DATA2SZ,
+ ALTDS1,
+ IV1SZ,
+ AAD1SZ,
+ MODE1,
+ MODE2,
+ CCTRL,
+ DCTRL,
+ ICTRL,
+ CLRW,
+ CSTAT,
+ IFIFO,
+ NFIFO,
+ OFIFO,
+ PKASZ,
+ PKBSZ,
+ PKNSZ,
+ PKESZ,
+ /* DECO Registers */
+ MATH0,
+ MATH1,
+ MATH2,
+ MATH3,
+ DESCBUF,
+ JOBDESCBUF,
+ SHAREDESCBUF,
+ DPOVRD,
+ DJQDA,
+ DSTAT,
+ DPID,
+ DJQCTRL,
+ ALTSOURCE,
+ SEQINSZ,
+ SEQOUTSZ,
+ VSEQINSZ,
+ VSEQOUTSZ,
+ /* PKHA Registers */
+ PKA,
+ PKN,
+ PKA0,
+ PKA1,
+ PKA2,
+ PKA3,
+ PKB,
+ PKB0,
+ PKB1,
+ PKB2,
+ PKB3,
+ PKE,
+ /* Pseudo registers */
+ AB1,
+ AB2,
+ ABD,
+ IFIFOABD,
+ IFIFOAB1,
+ IFIFOAB2,
+ AFHA_SBOX,
+ MDHA_SPLIT_KEY,
+ JOBSRC,
+ ZERO,
+ ONE,
+ AAD1,
+ IV1,
+ IV2,
+ MSG1,
+ MSG2,
+ MSG,
+ MSGOUTSNOOP,
+ MSGINSNOOP,
+ ICV1,
+ ICV2,
+ SKIP,
+ NONE,
+ RNGOFIFO,
+ RNG,
+ IDFNS,
+ ODFNS,
+ NFIFOSZ,
+ SZ,
+ PAD,
+ SAD1,
+ AAD2,
+ BIT_DATA,
+ NFIFO_SZL,
+ NFIFO_SZM,
+ NFIFO_L,
+ NFIFO_M,
+ SZL,
+ SZM,
+ JOBDESCBUF_EFF,
+ SHAREDESCBUF_EFF,
+ METADATA,
+ GTR,
+ STR,
+ OFIFO_SYNC,
+ MSGOUTSNOOP_ALT
+};
+
+/* Command flags */
+#define FLUSH1 BIT(0)
+#define LAST1 BIT(1)
+#define LAST2 BIT(2)
+#define IMMED BIT(3)
+#define SGF BIT(4)
+#define VLF BIT(5)
+#define EXT BIT(6)
+#define CONT BIT(7)
+#define SEQ BIT(8)
+#define AIDF BIT(9)
+#define FLUSH2 BIT(10)
+#define CLASS1 BIT(11)
+#define CLASS2 BIT(12)
+#define BOTH BIT(13)
+
+/**
+ * DCOPY - (AIOP only) command param is pointer to external memory
+ *
+ * CDMA must be used to transfer the key via DMA into Workspace Area.
+ * Valid only in combination with IMMED flag.
+ */
+#define DCOPY BIT(30)
+
+/**
+ * COPY - command param is pointer (not immediate)
+ *
+ * Valid only in combination with the IMMED flag.
+ */
+#define COPY BIT(31)
+
+#define __COPY_MASK (COPY | DCOPY)
+
+/* SEQ IN/OUT PTR Command specific flags */
+#define RBS BIT(16)
+#define INL BIT(17)
+#define PRE BIT(18)
+#define RTO BIT(19)
+#define RJD BIT(20)
+#define SOP BIT(21)
+#define RST BIT(22)
+#define EWS BIT(23)
+
+#define ENC BIT(14) /* Encrypted Key */
+#define EKT BIT(15) /* AES CCM Encryption (default is
+ * AES ECB Encryption) */
+#define TK BIT(16) /* Trusted Descriptor Key (default is
+ * Job Descriptor Key) */
+#define NWB BIT(17) /* No Write Back Key */
+#define PTS BIT(18) /* Plaintext Store */
+
+/* HEADER Command specific flags */
+#define RIF BIT(16)
+#define DNR BIT(17)
+#define CIF BIT(18)
+#define PD BIT(19)
+#define RSMS BIT(20)
+#define TD BIT(21)
+#define MTD BIT(22)
+#define REO BIT(23)
+#define SHR BIT(24)
+#define SC BIT(25)
+/* Extended HEADER specific flags */
+#define DSV BIT(7)
+#define DSEL_MASK 0x00000007 /* DECO Select */
+#define FTD BIT(8)
+
+/* JUMP Command specific flags */
+#define NIFP BIT(20)
+#define NIP BIT(21)
+#define NOP BIT(22)
+#define NCP BIT(23)
+#define CALM BIT(24)
+
+#define MATH_Z BIT(25)
+#define MATH_N BIT(26)
+#define MATH_NV BIT(27)
+#define MATH_C BIT(28)
+#define PK_0 BIT(29)
+#define PK_GCD_1 BIT(30)
+#define PK_PRIME BIT(31)
+#define SELF BIT(0)
+#define SHRD BIT(1)
+#define JQP BIT(2)
+
+/* NFIFOADD specific flags */
+#define PAD_ZERO BIT(16)
+#define PAD_NONZERO BIT(17)
+#define PAD_INCREMENT BIT(18)
+#define PAD_RANDOM BIT(19)
+#define PAD_ZERO_N1 BIT(20)
+#define PAD_NONZERO_0 BIT(21)
+#define PAD_N1 BIT(23)
+#define PAD_NONZERO_N BIT(24)
+#define OC BIT(25)
+#define BM BIT(26)
+#define PR BIT(27)
+#define PS BIT(28)
+#define BP BIT(29)
+
+/* MOVE Command specific flags */
+#define WAITCOMP BIT(16)
+#define SIZE_WORD BIT(17)
+#define SIZE_BYTE BIT(18)
+#define SIZE_DWORD BIT(19)
+
+/* MATH command specific flags */
+#define IFB MATH_IFB
+#define NFU MATH_NFU
+#define STL MATH_STL
+#define SSEL MATH_SSEL
+#define SWP MATH_SWP
+#define IMMED2 BIT(31)
+
+/**
+ * struct program - descriptor buffer management structure
+ * @current_pc: current offset in descriptor
+ * @current_instruction: current instruction in descriptor
+ * @first_error_pc: offset of the first error in descriptor
+ * @start_pc: start offset in descriptor buffer
+ * @buffer: buffer carrying descriptor
+ * @shrhdr: shared descriptor header
+ * @jobhdr: job descriptor header
+ * @ps: pointer fields size; if true, pointers are 36 bits in length,
+ *      otherwise 32 bits in length
+ * @bswap: if true, perform byte swap on a 4-byte boundary
+ */
+struct program {
+ unsigned current_pc;
+ unsigned current_instruction;
+ unsigned first_error_pc;
+ unsigned start_pc;
+ uint32_t *buffer;
+ uint32_t *shrhdr;
+ uint32_t *jobhdr;
+ bool ps;
+ bool bswap;
+};
+
+static inline void rta_program_cntxt_init(struct program *program,
+ uint32_t *buffer, unsigned offset)
+{
+ program->current_pc = 0;
+ program->current_instruction = 0;
+ program->first_error_pc = 0;
+ program->start_pc = offset;
+ program->buffer = buffer;
+ program->shrhdr = NULL;
+ program->jobhdr = NULL;
+ program->ps = false;
+ program->bswap = false;
+}
+
+static inline void __rta__desc_bswap(uint32_t *buff, unsigned buff_len)
+{
+ unsigned i;
+
+ for (i = 0; i < buff_len; i++)
+ buff[i] = swab32(buff[i]);
+}
+
+static inline unsigned rta_program_finalize(struct program *program)
+{
+	/* Descriptor is not allowed to exceed 64 words */
+ if (program->current_pc > MAX_CAAM_DESCSIZE)
+ pr_err("Descriptor Size exceeded max limit of 64 words\n");
+
+ /* Descriptor is erroneous */
+ if (program->first_error_pc)
+ pr_err("Descriptor creation error\n");
+
+ /* Update descriptor length in shared and job descriptor headers */
+ if (program->shrhdr != NULL) {
+ *program->shrhdr |= program->current_pc;
+ if (program->bswap)
+ __rta__desc_bswap(program->shrhdr, program->current_pc);
+ } else if (program->jobhdr != NULL) {
+ *program->jobhdr |= program->current_pc;
+ if (program->bswap)
+ __rta__desc_bswap(program->jobhdr, program->current_pc);
+ }
+
+ return program->current_pc;
+}
+
+static inline unsigned rta_program_set_36bit_addr(struct program *program)
+{
+ program->ps = true;
+ return program->current_pc;
+}
+
+static inline unsigned rta_program_set_bswap(struct program *program)
+{
+ program->bswap = true;
+ return program->current_pc;
+}
+
+static inline void __rta_out32(struct program *program, uint32_t val)
+{
+ program->buffer[program->current_pc] = val;
+ program->current_pc++;
+}
+
+static inline void __rta_out64(struct program *program, bool is_ext,
+ uint64_t val)
+{
+ if (is_ext)
+ __rta_out32(program, upper_32_bits(val));
+
+ __rta_out32(program, lower_32_bits(val));
+}
+
+static inline unsigned rta_word(struct program *program, uint32_t val)
+{
+ unsigned start_pc = program->current_pc;
+
+ __rta_out32(program, val);
+
+ return start_pc;
+}
+
+static inline unsigned rta_dword(struct program *program, uint64_t val)
+{
+ unsigned start_pc = program->current_pc;
+
+ __rta_out64(program, true, val);
+
+ return start_pc;
+}
+
+static inline unsigned rta_copy_data(struct program *program, uint8_t *data,
+ unsigned length)
+{
+ unsigned i;
+ unsigned start_pc = program->current_pc;
+ uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+
+ for (i = 0; i < length; i++)
+ *tmp++ = data[i];
+ program->current_pc += (length + 3) / 4;
+
+ return start_pc;
+}
+
+#if defined(__EWL__) && defined(AIOP)
+static inline void __rta_dma_data(void *ws_dst, uint64_t ext_address,
+ uint16_t size)
+{ cdma_read(ws_dst, ext_address, size); }
+#else
+static inline void __rta_dma_data(void *ws_dst, uint64_t ext_address,
+ uint16_t size)
+{ pr_warn("RTA: DCOPY not supported, DMA will be skipped\n"); }
+#endif /* defined(__EWL__) && defined(AIOP) */
+
+static inline void __rta_inline_data(struct program *program, uint64_t data,
+ uint32_t copy_data, uint32_t length)
+{
+ if (!copy_data) {
+ __rta_out64(program, length > 4, data);
+ } else if (copy_data & COPY) {
+ uint8_t *tmp = (uint8_t *)&program->buffer[program->current_pc];
+ uint32_t i;
+
+ for (i = 0; i < length; i++)
+ *tmp++ = ((uint8_t *)(uintptr_t)data)[i];
+ program->current_pc += ((length + 3) / 4);
+ } else if (copy_data & DCOPY) {
+ __rta_dma_data(&program->buffer[program->current_pc], data,
+ (uint16_t)length);
+ program->current_pc += ((length + 3) / 4);
+ }
+}
+
+static inline unsigned rta_desc_len(uint32_t *buffer)
+{
+ if ((*buffer & CMD_MASK) == CMD_DESC_HDR)
+ return *buffer & HDR_DESCLEN_MASK;
+ else
+ return *buffer & HDR_DESCLEN_SHR_MASK;
+}
+
+static inline unsigned rta_desc_bytes(uint32_t *buffer)
+{
+ return (unsigned)(rta_desc_len(buffer) * CAAM_CMD_SZ);
+}
+
+static inline unsigned rta_set_label(struct program *program)
+{
+ return program->current_pc + program->start_pc;
+}
+
+static inline int rta_patch_move(struct program *program, int line,
+ unsigned new_ref, bool check_swap)
+{
+ uint32_t opcode;
+ bool bswap = check_swap && program->bswap;
+
+ if (line < 0)
+ return -EINVAL;
+
+ opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+ opcode &= (uint32_t)~MOVE_OFFSET_MASK;
+ opcode |= (new_ref << (MOVE_OFFSET_SHIFT + 2)) & MOVE_OFFSET_MASK;
+ program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+ return 0;
+}
+
+static inline int rta_patch_jmp(struct program *program, int line,
+ unsigned new_ref, bool check_swap)
+{
+ uint32_t opcode;
+ bool bswap = check_swap && program->bswap;
+
+ if (line < 0)
+ return -EINVAL;
+
+ opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+ opcode &= (uint32_t)~JUMP_OFFSET_MASK;
+ opcode |= (new_ref - (line + program->start_pc)) & JUMP_OFFSET_MASK;
+ program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+ return 0;
+}
+
+static inline int rta_patch_header(struct program *program, int line,
+ unsigned new_ref, bool check_swap)
+{
+ uint32_t opcode;
+ bool bswap = check_swap && program->bswap;
+
+ if (line < 0)
+ return -EINVAL;
+
+ opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+ opcode &= (uint32_t)~HDR_START_IDX_MASK;
+ opcode |= (new_ref << HDR_START_IDX_SHIFT) & HDR_START_IDX_MASK;
+ program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+ return 0;
+}
+
+static inline int rta_patch_load(struct program *program, int line,
+ unsigned new_ref)
+{
+ uint32_t opcode;
+
+ if (line < 0)
+ return -EINVAL;
+
+ opcode = program->buffer[line] & (uint32_t)~LDST_OFFSET_MASK;
+
+ if (opcode & (LDST_SRCDST_WORD_DESCBUF | LDST_CLASS_DECO))
+ opcode |= (new_ref << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+ else
+ opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+ LDST_OFFSET_MASK;
+
+ program->buffer[line] = opcode;
+
+ return 0;
+}
+
+static inline int rta_patch_store(struct program *program, int line,
+ unsigned new_ref, bool check_swap)
+{
+ uint32_t opcode;
+ bool bswap = check_swap && program->bswap;
+
+ if (line < 0)
+ return -EINVAL;
+
+ opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+ opcode &= (uint32_t)~LDST_OFFSET_MASK;
+
+ switch (opcode & LDST_SRCDST_MASK) {
+ case LDST_SRCDST_WORD_DESCBUF:
+ case LDST_SRCDST_WORD_DESCBUF_JOB:
+ case LDST_SRCDST_WORD_DESCBUF_SHARED:
+ case LDST_SRCDST_WORD_DESCBUF_JOB_WE:
+ case LDST_SRCDST_WORD_DESCBUF_SHARED_WE:
+ opcode |= ((new_ref) << LDST_OFFSET_SHIFT) & LDST_OFFSET_MASK;
+ break;
+ default:
+ opcode |= (new_ref << (LDST_OFFSET_SHIFT + 2)) &
+ LDST_OFFSET_MASK;
+ }
+
+ program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+ return 0;
+}
+
+static inline int rta_patch_raw(struct program *program, int line,
+ unsigned mask, unsigned new_val,
+ bool check_swap)
+{
+ uint32_t opcode;
+ bool bswap = check_swap && program->bswap;
+
+ if (line < 0)
+ return -EINVAL;
+
+ opcode = bswap ? swab32(program->buffer[line]) : program->buffer[line];
+
+ opcode &= (uint32_t)~mask;
+ opcode |= new_val & mask;
+ program->buffer[line] = bswap ? swab32(opcode) : opcode;
+
+ return 0;
+}
+
+static inline int __rta_map_opcode(uint32_t name,
+ const uint32_t (*map_table)[2],
+ unsigned num_of_entries, uint32_t *val)
+{
+ unsigned i;
+
+ for (i = 0; i < num_of_entries; i++)
+ if (map_table[i][0] == name) {
+ *val = map_table[i][1];
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static inline void __rta_map_flags(uint32_t flags,
+ const uint32_t (*flags_table)[2],
+ unsigned num_of_entries, uint32_t *opcode)
+{
+ unsigned i;
+
+ for (i = 0; i < num_of_entries; i++) {
+ if (flags_table[i][0] & flags)
+ *opcode |= flags_table[i][1];
+ }
+}
+
+#endif /* __RTA_SEC_RUN_TIME_ASM_H__ */
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:29 UTC
Replace desc.h with a newer version from RTA.
While here, also add PROTOCOL command support
(not included in part 2 due to patch size limitations).

Signed-off-by: Horia Geanta <***@freescale.com>
Signed-off-by: Carmen Iorga <***@freescale.com>
---
drivers/crypto/caam/Makefile | 4 +-
drivers/crypto/caam/compat.h | 1 +
drivers/crypto/caam/desc_constr.h | 6 +-
drivers/crypto/caam/error.c | 2 +-
drivers/crypto/caam/{ => flib}/desc.h | 1317 ++++++++++++++++++++++++++-----
drivers/crypto/caam/flib/desc/common.h | 151 ++++
drivers/crypto/caam/flib/desc/jobdesc.h | 57 ++
drivers/crypto/caam/jr.c | 2 +-
8 files changed, 1340 insertions(+), 200 deletions(-)
rename drivers/crypto/caam/{ => flib}/desc.h (54%)
create mode 100644 drivers/crypto/caam/flib/desc/common.h
create mode 100644 drivers/crypto/caam/flib/desc/jobdesc.h

diff --git a/drivers/crypto/caam/Makefile b/drivers/crypto/caam/Makefile
index 550758a333e7..10a97a8a8391 100644
--- a/drivers/crypto/caam/Makefile
+++ b/drivers/crypto/caam/Makefile
@@ -2,9 +2,11 @@
# Makefile for the CAAM backend and dependent components
#
ifeq ($(CONFIG_CRYPTO_DEV_FSL_CAAM_DEBUG), y)
- EXTRA_CFLAGS := -DDEBUG
+ ccflags-y := -DDEBUG
endif

+ccflags-y += -I$(src)
+
obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam.o
obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_JR) += caam_jr.o
obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API) += caamalg.o
diff --git a/drivers/crypto/caam/compat.h b/drivers/crypto/caam/compat.h
index f227922cea38..8fe0f6993ab0 100644
--- a/drivers/crypto/caam/compat.h
+++ b/drivers/crypto/caam/compat.h
@@ -23,6 +23,7 @@
#include <linux/types.h>
#include <linux/debugfs.h>
#include <linux/circ_buf.h>
+#include <linux/bitops.h>
#include <net/xfrm.h>

#include <crypto/algapi.h>
diff --git a/drivers/crypto/caam/desc_constr.h b/drivers/crypto/caam/desc_constr.h
index 7eec20bb3849..c344fbce1c67 100644
--- a/drivers/crypto/caam/desc_constr.h
+++ b/drivers/crypto/caam/desc_constr.h
@@ -4,13 +4,9 @@
* Copyright 2008-2012 Freescale Semiconductor, Inc.
*/

-#include "desc.h"
+#include "flib/desc.h"

#define IMMEDIATE (1 << 23)
-#define CAAM_CMD_SZ sizeof(u32)
-#define CAAM_PTR_SZ sizeof(dma_addr_t)
-#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
-#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)

#ifdef DEBUG
#define PRINT_POS do { printk(KERN_DEBUG "%02d: %s\n", desc_len(desc),\
diff --git a/drivers/crypto/caam/error.c b/drivers/crypto/caam/error.c
index 7d6ed4722345..5daa9cd4109a 100644
--- a/drivers/crypto/caam/error.c
+++ b/drivers/crypto/caam/error.c
@@ -7,7 +7,7 @@
#include "compat.h"
#include "regs.h"
#include "intern.h"
-#include "desc.h"
+#include "flib/desc.h"
#include "jr.h"
#include "error.h"

diff --git a/drivers/crypto/caam/desc.h b/drivers/crypto/caam/flib/desc.h
similarity index 54%
rename from drivers/crypto/caam/desc.h
rename to drivers/crypto/caam/flib/desc.h
index eb8b870d03a9..9e669627af47 100644
--- a/drivers/crypto/caam/desc.h
+++ b/drivers/crypto/caam/flib/desc.h
@@ -1,16 +1,26 @@
/*
- * CAAM descriptor composition header
- * Definitions to support CAAM descriptor instruction generation
+ * SEC descriptor composition header.
+ * Definitions to support SEC descriptor instruction generation
*
- * Copyright 2008-2011 Freescale Semiconductor, Inc.
+ * Copyright 2008-2013 Freescale Semiconductor, Inc.
*/

-#ifndef DESC_H
-#define DESC_H
+#ifndef __RTA_DESC_H__
+#define __RTA_DESC_H__

-/* Max size of any CAAM descriptor in 32-bit words, inclusive of header */
+/* flib/compat.h is not delivered in kernel */
+#ifndef __KERNEL__
+#include "flib/compat.h"
+#endif
+
+/* Max size of any SEC descriptor in 32-bit words, inclusive of header */
#define MAX_CAAM_DESCSIZE 64

+#define CAAM_CMD_SZ sizeof(uint32_t)
+#define CAAM_PTR_SZ sizeof(dma_addr_t)
+#define CAAM_DESC_BYTES_MAX (CAAM_CMD_SZ * MAX_CAAM_DESCSIZE)
+#define DESC_JOB_IO_LEN (CAAM_CMD_SZ * 5 + CAAM_PTR_SZ * 3)
+
/* Block size of any entity covered/uncovered with a KEK/TKEK */
#define KEK_BLOCKSIZE 16

@@ -19,7 +29,7 @@
* inside a descriptor command word.
*/
#define CMD_SHIFT 27
-#define CMD_MASK 0xf8000000
+#define CMD_MASK (0x1f << CMD_SHIFT)

#define CMD_KEY (0x00 << CMD_SHIFT)
#define CMD_SEQ_KEY (0x01 << CMD_SHIFT)
@@ -27,20 +37,23 @@
#define CMD_SEQ_LOAD (0x03 << CMD_SHIFT)
#define CMD_FIFO_LOAD (0x04 << CMD_SHIFT)
#define CMD_SEQ_FIFO_LOAD (0x05 << CMD_SHIFT)
+#define CMD_MOVEDW (0x06 << CMD_SHIFT)
+#define CMD_MOVEB (0x07 << CMD_SHIFT)
#define CMD_STORE (0x0a << CMD_SHIFT)
#define CMD_SEQ_STORE (0x0b << CMD_SHIFT)
#define CMD_FIFO_STORE (0x0c << CMD_SHIFT)
#define CMD_SEQ_FIFO_STORE (0x0d << CMD_SHIFT)
#define CMD_MOVE_LEN (0x0e << CMD_SHIFT)
#define CMD_MOVE (0x0f << CMD_SHIFT)
-#define CMD_OPERATION (0x10 << CMD_SHIFT)
-#define CMD_SIGNATURE (0x12 << CMD_SHIFT)
-#define CMD_JUMP (0x14 << CMD_SHIFT)
-#define CMD_MATH (0x15 << CMD_SHIFT)
-#define CMD_DESC_HDR (0x16 << CMD_SHIFT)
-#define CMD_SHARED_DESC_HDR (0x17 << CMD_SHIFT)
-#define CMD_SEQ_IN_PTR (0x1e << CMD_SHIFT)
-#define CMD_SEQ_OUT_PTR (0x1f << CMD_SHIFT)
+#define CMD_OPERATION ((uint32_t)(0x10 << CMD_SHIFT))
+#define CMD_SIGNATURE ((uint32_t)(0x12 << CMD_SHIFT))
+#define CMD_JUMP ((uint32_t)(0x14 << CMD_SHIFT))
+#define CMD_MATH ((uint32_t)(0x15 << CMD_SHIFT))
+#define CMD_DESC_HDR ((uint32_t)(0x16 << CMD_SHIFT))
+#define CMD_SHARED_DESC_HDR ((uint32_t)(0x17 << CMD_SHIFT))
+#define CMD_MATHI ((uint32_t)(0x1d << CMD_SHIFT))
+#define CMD_SEQ_IN_PTR ((uint32_t)(0x1e << CMD_SHIFT))
+#define CMD_SEQ_OUT_PTR ((uint32_t)(0x1f << CMD_SHIFT))

/* General-purpose class selector for all commands */
#define CLASS_SHIFT 25
@@ -51,23 +64,47 @@
#define CLASS_2 (0x02 << CLASS_SHIFT)
#define CLASS_BOTH (0x03 << CLASS_SHIFT)

+/* ICV Check bits for Algo Operation command */
+#define ICV_CHECK_DISABLE 0
+#define ICV_CHECK_ENABLE 1
+
+
+/* Encap Mode check bits for Algo Operation command */
+#define DIR_ENC 1
+#define DIR_DEC 0
+
/*
* Descriptor header command constructs
* Covers shared, job, and trusted descriptor headers
*/

/*
- * Do Not Run - marks a descriptor inexecutable if there was
+ * Extended Job Descriptor Header
+ */
+#define HDR_EXT BIT(24)
+
+/*
+ * Read input frame as soon as possible (SHR HDR)
+ */
+#define HDR_RIF BIT(25)
+
+/*
+ * Require SEQ LIODN to be the Same (JOB HDR)
+ */
+#define HDR_RSLS BIT(25)
+
+/*
+ * Do Not Run - marks a descriptor not executable if there was
* a preceding error somewhere
*/
-#define HDR_DNR 0x01000000
+#define HDR_DNR BIT(24)

/*
* ONE - should always be set. Combination of ONE (always
* set) and ZRO (always clear) forms an endianness sanity check
*/
-#define HDR_ONE 0x00800000
-#define HDR_ZRO 0x00008000
+#define HDR_ONE BIT(23)
+#define HDR_ZRO BIT(15)

/* Start Index or SharedDesc Length */
#define HDR_START_IDX_SHIFT 16
@@ -80,25 +117,34 @@
#define HDR_DESCLEN_MASK 0x7f

/* This is a TrustedDesc (if not SharedDesc) */
-#define HDR_TRUSTED 0x00004000
+#define HDR_TRUSTED BIT(14)

/* Make into TrustedDesc (if not SharedDesc) */
-#define HDR_MAKE_TRUSTED 0x00002000
+#define HDR_MAKE_TRUSTED BIT(13)
+
+/* Clear Input FiFO (if SharedDesc) */
+#define HDR_CLEAR_IFIFO BIT(13)

/* Save context if self-shared (if SharedDesc) */
-#define HDR_SAVECTX 0x00001000
+#define HDR_SAVECTX BIT(12)

/* Next item points to SharedDesc */
-#define HDR_SHARED 0x00001000
+#define HDR_SHARED BIT(12)

/*
* Reverse Execution Order - execute JobDesc first, then
* execute SharedDesc (normally SharedDesc goes first).
*/
-#define HDR_REVERSE 0x00000800
+#define HDR_REVERSE BIT(11)
+
+/* Propagate DNR property to SharedDesc */
+#define HDR_PROP_DNR BIT(11)

-/* Propogate DNR property to SharedDesc */
-#define HDR_PROP_DNR 0x00000800
+/* DECO Select Valid */
+#define HDR_EXT_DSEL_VALID BIT(7)
+
+/* Fake trusted descriptor */
+#define HDR_EXT_FTD BIT(8)

/* JobDesc/SharedDesc share property */
#define HDR_SD_SHARE_SHIFT 8
@@ -121,40 +167,53 @@
*/

/* Key Destination Class: 01 = Class 1, 02 - Class 2 */
-#define KEY_DEST_CLASS_SHIFT 25 /* use CLASS_1 or CLASS_2 */
+#define KEY_DEST_CLASS_SHIFT 25
#define KEY_DEST_CLASS_MASK (0x03 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS1 (1 << KEY_DEST_CLASS_SHIFT)
+#define KEY_DEST_CLASS2 (2 << KEY_DEST_CLASS_SHIFT)

/* Scatter-Gather Table/Variable Length Field */
-#define KEY_SGF 0x01000000
-#define KEY_VLF 0x01000000
+#define KEY_SGF BIT(24)
+#define KEY_VLF BIT(24)

/* Immediate - Key follows command in the descriptor */
-#define KEY_IMM 0x00800000
+#define KEY_IMM BIT(23)
+
+/*
+ * Already in Input Data FIFO - the Input Data Sequence is not read, since it is
+ * already in the Input Data FIFO.
+ */
+#define KEY_AIDF BIT(23)

/*
* Encrypted - Key is encrypted either with the KEK, or
- * with the TDKEK if TK is set
+ * with the TDKEK if this descriptor is trusted
*/
-#define KEY_ENC 0x00400000
+#define KEY_ENC BIT(22)

/*
* No Write Back - Do not allow key to be FIFO STOREd
*/
-#define KEY_NWB 0x00200000
+#define KEY_NWB BIT(21)

/*
* Enhanced Encryption of Key
*/
-#define KEY_EKT 0x00100000
+#define KEY_EKT BIT(20)

/*
* Encrypted with Trusted Key
*/
-#define KEY_TK 0x00008000
+#define KEY_TK BIT(15)
+
+/*
+ * Plaintext Store
+ */
+#define KEY_PTS BIT(14)

/*
* KDEST - Key Destination: 0 - class key register,
- * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split-key
+ * 1 - PKHA 'e', 2 - AFHA Sbox, 3 - MDHA split key
*/
#define KEY_DEST_SHIFT 16
#define KEY_DEST_MASK (0x03 << KEY_DEST_SHIFT)
@@ -183,13 +242,13 @@
#define LDST_CLASS_DECO (0x03 << LDST_CLASS_SHIFT)

/* Scatter-Gather Table/Variable Length Field */
-#define LDST_SGF 0x01000000
-#define LDST_VLF LDST_SGF
+#define LDST_SGF BIT(24)
+#define LDST_VLF BIT(24)

/* Immediate - Key follows this command in descriptor */
#define LDST_IMM_MASK 1
#define LDST_IMM_SHIFT 23
-#define LDST_IMM (LDST_IMM_MASK << LDST_IMM_SHIFT)
+#define LDST_IMM BIT(23)

/* SRC/DST - Destination for LOAD, Source for STORE */
#define LDST_SRCDST_SHIFT 16
@@ -201,9 +260,13 @@
#define LDST_SRCDST_BYTE_OUTFIFO (0x7e << LDST_SRCDST_SHIFT)

#define LDST_SRCDST_WORD_MODE_REG (0x00 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQCTRL (0x00 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_KEYSZ_REG (0x01 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_JQDAR (0x01 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_DATASZ_REG (0x02 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_DECO_STAT (0x02 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_ICVSZ_REG (0x03 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_PID (0x04 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_CHACTRL (0x06 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_DECOCTRL (0x06 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_IRQCTRL (0x07 << LDST_SRCDST_SHIFT)
@@ -218,15 +281,26 @@
#define LDST_SRCDST_WORD_CLASS1_IV_SZ (0x0c << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_ALTDS_CLASS1 (0x0f << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_PKHA_A_SZ (0x10 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_GTR (0x10 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_PKHA_B_SZ (0x11 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_PKHA_N_SZ (0x12 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_PKHA_E_SZ (0x13 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_CLASS_CTX (0x20 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_STR (0x20 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_DESCBUF (0x40 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_DESCBUF_JOB (0x41 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_DESCBUF_SHARED (0x42 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_DESCBUF_JOB_WE (0x45 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_DESCBUF_SHARED_WE (0x46 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZL (0x70 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_SZM (0x71 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_L (0x72 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_INFO_FIFO_M (0x73 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZL (0x74 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_SZM (0x75 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_IFNSR (0x76 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_OFNSR (0x77 << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_BYTE_ALTSOURCE (0x78 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_INFO_FIFO (0x7a << LDST_SRCDST_SHIFT)

/* Offset in source/destination */
@@ -241,8 +315,8 @@
#define LDOFF_CHG_SHARE_OK_PROP (0x2 << LDOFF_CHG_SHARE_SHIFT)
#define LDOFF_CHG_SHARE_OK_NO_PROP (0x3 << LDOFF_CHG_SHARE_SHIFT)

-#define LDOFF_ENABLE_AUTO_NFIFO (1 << 2)
-#define LDOFF_DISABLE_AUTO_NFIFO (1 << 3)
+#define LDOFF_ENABLE_AUTO_NFIFO BIT(2)
+#define LDOFF_DISABLE_AUTO_NFIFO BIT(3)

#define LDOFF_CHG_NONSEQLIODN_SHIFT 4
#define LDOFF_CHG_NONSEQLIODN_MASK (0x3 << LDOFF_CHG_NONSEQLIODN_SHIFT)
@@ -256,19 +330,94 @@
#define LDOFF_CHG_SEQLIODN_NON_SEQ (0x2 << LDOFF_CHG_SEQLIODN_SHIFT)
#define LDOFF_CHG_SEQLIODN_TRUSTED (0x3 << LDOFF_CHG_SEQLIODN_SHIFT)

-/* Data length in bytes */
+/* Data length in bytes */
#define LDST_LEN_SHIFT 0
#define LDST_LEN_MASK (0xff << LDST_LEN_SHIFT)

/* Special Length definitions when dst=deco-ctrl */
-#define LDLEN_ENABLE_OSL_COUNT (1 << 7)
-#define LDLEN_RST_CHA_OFIFO_PTR (1 << 6)
-#define LDLEN_RST_OFIFO (1 << 5)
-#define LDLEN_SET_OFIFO_OFF_VALID (1 << 4)
-#define LDLEN_SET_OFIFO_OFF_RSVD (1 << 3)
+#define LDLEN_ENABLE_OSL_COUNT BIT(7)
+#define LDLEN_RST_CHA_OFIFO_PTR BIT(6)
+#define LDLEN_RST_OFIFO BIT(5)
+#define LDLEN_SET_OFIFO_OFF_VALID BIT(4)
+#define LDLEN_SET_OFIFO_OFF_RSVD BIT(3)
#define LDLEN_SET_OFIFO_OFFSET_SHIFT 0
#define LDLEN_SET_OFIFO_OFFSET_MASK (3 << LDLEN_SET_OFIFO_OFFSET_SHIFT)

+/* CCB Clear Written Register bits */
+#define CLRW_CLR_C1MODE BIT(0)
+#define CLRW_CLR_C1DATAS BIT(2)
+#define CLRW_CLR_C1ICV BIT(3)
+#define CLRW_CLR_C1CTX BIT(5)
+#define CLRW_CLR_C1KEY BIT(6)
+#define CLRW_CLR_PK_A BIT(12)
+#define CLRW_CLR_PK_B BIT(13)
+#define CLRW_CLR_PK_N BIT(14)
+#define CLRW_CLR_PK_E BIT(15)
+#define CLRW_CLR_C2MODE BIT(16)
+#define CLRW_CLR_C2KEYS BIT(17)
+#define CLRW_CLR_C2DATAS BIT(18)
+#define CLRW_CLR_C2CTX BIT(21)
+#define CLRW_CLR_C2KEY BIT(22)
+#define CLRW_RESET_CLS2_DONE BIT(26) /* era 4 */
+#define CLRW_RESET_CLS1_DONE BIT(27) /* era 4 */
+#define CLRW_RESET_CLS2_CHA BIT(28) /* era 4 */
+#define CLRW_RESET_CLS1_CHA BIT(29) /* era 4 */
+#define CLRW_RESET_OFIFO BIT(30) /* era 3 */
+#define CLRW_RESET_IFIFO_DFIFO BIT(31) /* era 3 */
+
+/* CHA Control Register bits */
+#define CCTRL_RESET_CHA_ALL BIT(0)
+#define CCTRL_RESET_CHA_AESA BIT(1)
+#define CCTRL_RESET_CHA_DESA BIT(2)
+#define CCTRL_RESET_CHA_AFHA BIT(3)
+#define CCTRL_RESET_CHA_KFHA BIT(4)
+#define CCTRL_RESET_CHA_SF8A BIT(5)
+#define CCTRL_RESET_CHA_PKHA BIT(6)
+#define CCTRL_RESET_CHA_MDHA BIT(7)
+#define CCTRL_RESET_CHA_CRCA BIT(8)
+#define CCTRL_RESET_CHA_RNG BIT(9)
+#define CCTRL_RESET_CHA_SF9A BIT(10)
+#define CCTRL_RESET_CHA_ZUCE BIT(11)
+#define CCTRL_RESET_CHA_ZUCA BIT(12)
+#define CCTRL_UNLOAD_PK_A0 BIT(16)
+#define CCTRL_UNLOAD_PK_A1 BIT(17)
+#define CCTRL_UNLOAD_PK_A2 BIT(18)
+#define CCTRL_UNLOAD_PK_A3 BIT(19)
+#define CCTRL_UNLOAD_PK_B0 BIT(20)
+#define CCTRL_UNLOAD_PK_B1 BIT(21)
+#define CCTRL_UNLOAD_PK_B2 BIT(22)
+#define CCTRL_UNLOAD_PK_B3 BIT(23)
+#define CCTRL_UNLOAD_PK_N BIT(24)
+#define CCTRL_UNLOAD_PK_A BIT(26)
+#define CCTRL_UNLOAD_PK_B BIT(27)
+#define CCTRL_UNLOAD_SBOX BIT(28)
+
+/* IRQ Control Register (CxCIRQ) bits */
+#define CIRQ_ADI BIT(1)
+#define CIRQ_DDI BIT(2)
+#define CIRQ_RCDI BIT(3)
+#define CIRQ_KDI BIT(4)
+#define CIRQ_S8DI BIT(5)
+#define CIRQ_PDI BIT(6)
+#define CIRQ_MDI BIT(7)
+#define CIRQ_CDI BIT(8)
+#define CIRQ_RNDI BIT(9)
+#define CIRQ_S9DI BIT(10)
+#define CIRQ_ZEDI BIT(11) /* valid for Era 5 or higher */
+#define CIRQ_ZADI BIT(12) /* valid for Era 5 or higher */
+#define CIRQ_AEI BIT(17)
+#define CIRQ_DEI BIT(18)
+#define CIRQ_RCEI BIT(19)
+#define CIRQ_KEI BIT(20)
+#define CIRQ_S8EI BIT(21)
+#define CIRQ_PEI BIT(22)
+#define CIRQ_MEI BIT(23)
+#define CIRQ_CEI BIT(24)
+#define CIRQ_RNEI BIT(25)
+#define CIRQ_S9EI BIT(26)
+#define CIRQ_ZEEI BIT(27) /* valid for Era 5 or higher */
+#define CIRQ_ZAEI BIT(28) /* valid for Era 5 or higher */
+
/*
* FIFO_LOAD/FIFO_STORE/SEQ_FIFO_LOAD/SEQ_FIFO_STORE
* Command Constructs
@@ -291,6 +440,7 @@
#define FIFOST_CLASS_NORMAL (0x00 << FIFOST_CLASS_SHIFT)
#define FIFOST_CLASS_CLASS1KEY (0x01 << FIFOST_CLASS_SHIFT)
#define FIFOST_CLASS_CLASS2KEY (0x02 << FIFOST_CLASS_SHIFT)
+#define FIFOST_CLASS_BOTH (0x03 << FIFOST_CLASS_SHIFT)

/*
* Scatter-Gather Table/Variable Length Field
@@ -300,17 +450,27 @@
#define FIFOLDST_SGF_SHIFT 24
#define FIFOLDST_SGF_MASK (1 << FIFOLDST_SGF_SHIFT)
#define FIFOLDST_VLF_MASK (1 << FIFOLDST_SGF_SHIFT)
-#define FIFOLDST_SGF (1 << FIFOLDST_SGF_SHIFT)
-#define FIFOLDST_VLF (1 << FIFOLDST_SGF_SHIFT)
+#define FIFOLDST_SGF BIT(24)
+#define FIFOLDST_VLF BIT(24)

-/* Immediate - Data follows command in descriptor */
+/*
+ * Immediate - Data follows command in descriptor
+ * AIDF - Already in Input Data FIFO
+ */
#define FIFOLD_IMM_SHIFT 23
#define FIFOLD_IMM_MASK (1 << FIFOLD_IMM_SHIFT)
-#define FIFOLD_IMM (1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_AIDF_MASK (1 << FIFOLD_IMM_SHIFT)
+#define FIFOLD_IMM BIT(23)
+#define FIFOLD_AIDF BIT(23)
+
+#define FIFOST_IMM_SHIFT 23
+#define FIFOST_IMM_MASK (1 << FIFOST_IMM_SHIFT)
+#define FIFOST_IMM BIT(23)

/* Continue - Not the last FIFO store to come */
#define FIFOST_CONT_SHIFT 23
#define FIFOST_CONT_MASK (1 << FIFOST_CONT_SHIFT)
+#define FIFOST_CONT BIT(23)

/*
* Extended Length - use 32-bit extended length that
@@ -318,7 +478,7 @@
*/
#define FIFOLDST_EXT_SHIFT 22
#define FIFOLDST_EXT_MASK (1 << FIFOLDST_EXT_SHIFT)
-#define FIFOLDST_EXT (1 << FIFOLDST_EXT_SHIFT)
+#define FIFOLDST_EXT BIT(22)

/* Input data type.*/
#define FIFOLD_TYPE_SHIFT 16
@@ -360,7 +520,7 @@
#define FIFOLD_TYPE_LAST2FLUSH1 (0x05 << FIFOLD_TYPE_SHIFT)
#define FIFOLD_TYPE_LASTBOTH (0x06 << FIFOLD_TYPE_SHIFT)
#define FIFOLD_TYPE_LASTBOTHFL (0x07 << FIFOLD_TYPE_SHIFT)
-#define FIFOLD_TYPE_NOINFOFIFO (0x0F << FIFOLD_TYPE_SHIFT)
+#define FIFOLD_TYPE_NOINFOFIFO (0x0f << FIFOLD_TYPE_SHIFT)

#define FIFOLDST_LEN_MASK 0xffff
#define FIFOLDST_EXT_LEN_MASK 0xffffffff
@@ -393,6 +553,7 @@
#define FIFOST_TYPE_MESSAGE_DATA (0x30 << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_RNGSTORE (0x34 << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_RNGFIFO (0x35 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_METADATA (0x3e << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_SKIP (0x3f << FIFOST_TYPE_SHIFT)

/*
@@ -412,7 +573,7 @@

/* ProtocolID selectors - PROTID */
#define OP_PCLID_SHIFT 16
-#define OP_PCLID_MASK (0xff << 16)
+#define OP_PCLID_MASK (0xff << OP_PCLID_SHIFT)

/* Assuming OP_TYPE = OP_TYPE_UNI_PROTOCOL */
#define OP_PCLID_IKEV1_PRF (0x01 << OP_PCLID_SHIFT)
@@ -420,13 +581,14 @@
#define OP_PCLID_SSL30_PRF (0x08 << OP_PCLID_SHIFT)
#define OP_PCLID_TLS10_PRF (0x09 << OP_PCLID_SHIFT)
#define OP_PCLID_TLS11_PRF (0x0a << OP_PCLID_SHIFT)
+#define OP_PCLID_TLS12_PRF (0x0b << OP_PCLID_SHIFT)
#define OP_PCLID_DTLS10_PRF (0x0c << OP_PCLID_SHIFT)
-#define OP_PCLID_PRF (0x06 << OP_PCLID_SHIFT)
-#define OP_PCLID_BLOB (0x0d << OP_PCLID_SHIFT)
-#define OP_PCLID_SECRETKEY (0x11 << OP_PCLID_SHIFT)
#define OP_PCLID_PUBLICKEYPAIR (0x14 << OP_PCLID_SHIFT)
#define OP_PCLID_DSASIGN (0x15 << OP_PCLID_SHIFT)
#define OP_PCLID_DSAVERIFY (0x16 << OP_PCLID_SHIFT)
+#define OP_PCLID_DIFFIEHELLMAN (0x17 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSAENCRYPT (0x18 << OP_PCLID_SHIFT)
+#define OP_PCLID_RSADECRYPT (0x19 << OP_PCLID_SHIFT)

/* Assuming OP_TYPE = OP_TYPE_DECAP_PROTOCOL/ENCAP_PROTOCOL */
#define OP_PCLID_IPSEC (0x01 << OP_PCLID_SHIFT)
@@ -438,7 +600,15 @@
#define OP_PCLID_TLS10 (0x09 << OP_PCLID_SHIFT)
#define OP_PCLID_TLS11 (0x0a << OP_PCLID_SHIFT)
#define OP_PCLID_TLS12 (0x0b << OP_PCLID_SHIFT)
-#define OP_PCLID_DTLS (0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_DTLS10 (0x0c << OP_PCLID_SHIFT)
+#define OP_PCLID_BLOB (0x0d << OP_PCLID_SHIFT)
+#define OP_PCLID_IPSEC_NEW (0x11 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_DCRC (0x31 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_PDU (0x32 << OP_PCLID_SHIFT)
+#define OP_PCLID_3G_RLC_SDU (0x33 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_USER (0x42 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL (0x43 << OP_PCLID_SHIFT)
+#define OP_PCLID_LTE_PDCP_CTRL_MIXED (0x44 << OP_PCLID_SHIFT)

/*
* ProtocolInfo selectors
@@ -452,6 +622,7 @@
#define OP_PCL_IPSEC_DES_IV64 0x0100
#define OP_PCL_IPSEC_DES 0x0200
#define OP_PCL_IPSEC_3DES 0x0300
+#define OP_PCL_IPSEC_NULL 0x0b00
#define OP_PCL_IPSEC_AES_CBC 0x0c00
#define OP_PCL_IPSEC_AES_CTR 0x0d00
#define OP_PCL_IPSEC_AES_XTS 0x1600
@@ -461,6 +632,7 @@
#define OP_PCL_IPSEC_AES_GCM8 0x1200
#define OP_PCL_IPSEC_AES_GCM12 0x1300
#define OP_PCL_IPSEC_AES_GCM16 0x1400
+#define OP_PCL_IPSEC_AES_NULL_WITH_GMAC 0x1500

#define OP_PCL_IPSEC_HMAC_NULL 0x0000
#define OP_PCL_IPSEC_HMAC_MD5_96 0x0001
@@ -468,6 +640,7 @@
#define OP_PCL_IPSEC_AES_XCBC_MAC_96 0x0005
#define OP_PCL_IPSEC_HMAC_MD5_128 0x0006
#define OP_PCL_IPSEC_HMAC_SHA1_160 0x0007
+#define OP_PCL_IPSEC_AES_CMAC_96 0x0008
#define OP_PCL_IPSEC_HMAC_SHA2_256_128 0x000c
#define OP_PCL_IPSEC_HMAC_SHA2_384_192 0x000d
#define OP_PCL_IPSEC_HMAC_SHA2_512_256 0x000e
@@ -517,6 +690,32 @@
#define OP_PCL_SSL30_AES_256_CBC_SHA_16 0xc021
#define OP_PCL_SSL30_AES_256_CBC_SHA_17 0xc022

+#define OP_PCL_SSL30_AES_128_GCM_SHA256_1 0x009c
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_1 0x009d
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_2 0x009e
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_2 0x009f
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_3 0x00a0
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_3 0x00a1
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_4 0x00a2
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_4 0x00a3
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_5 0x00a4
+#define OP_PCL_SSL30_AES_256_GCM_SHA384_5 0x00a5
+#define OP_PCL_SSL30_AES_128_GCM_SHA256_6 0x00a6
+
+#define OP_PCL_TLS_DH_ANON_AES_256_GCM_SHA384 0x00a7
+#define OP_PCL_TLS_PSK_AES_128_GCM_SHA256 0x00a8
+#define OP_PCL_TLS_PSK_AES_256_GCM_SHA384 0x00a9
+#define OP_PCL_TLS_DHE_PSK_AES_128_GCM_SHA256 0x00aa
+#define OP_PCL_TLS_DHE_PSK_AES_256_GCM_SHA384 0x00ab
+#define OP_PCL_TLS_RSA_PSK_AES_128_GCM_SHA256 0x00ac
+#define OP_PCL_TLS_RSA_PSK_AES_256_GCM_SHA384 0x00ad
+#define OP_PCL_TLS_PSK_AES_128_CBC_SHA256 0x00ae
+#define OP_PCL_TLS_PSK_AES_256_CBC_SHA384 0x00af
+#define OP_PCL_TLS_DHE_PSK_AES_128_CBC_SHA256 0x00b2
+#define OP_PCL_TLS_DHE_PSK_AES_256_CBC_SHA384 0x00b3
+#define OP_PCL_TLS_RSA_PSK_AES_128_CBC_SHA256 0x00b6
+#define OP_PCL_TLS_RSA_PSK_AES_256_CBC_SHA384 0x00b7
+
#define OP_PCL_SSL30_3DES_EDE_CBC_MD5 0x0023

#define OP_PCL_SSL30_3DES_EDE_CBC_SHA 0x001f
@@ -617,6 +816,29 @@
#define OP_PCL_TLS10_AES_256_CBC_SHA_16 0xc021
#define OP_PCL_TLS10_AES_256_CBC_SHA_17 0xc022

+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc023
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc024
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_CBC_SHA256 0xc025
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_CBC_SHA384 0xc026
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc027
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_CBC_SHA384 0xc028
+#define OP_PCL_TLS_ECDH_RSA_AES_128_CBC_SHA256 0xc029
+#define OP_PCL_TLS_ECDH_RSA_AES_256_CBC_SHA384 0xc02a
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_128_GCM_SHA256 0xc02b
+#define OP_PCL_TLS_ECDHE_ECDSA_AES_256_GCM_SHA384 0xc02c
+#define OP_PCL_TLS_ECDH_ECDSA_AES_128_GCM_SHA256 0xc02d
+#define OP_PCL_TLS_ECDH_ECDSA_AES_256_GCM_SHA384 0xc02e
+#define OP_PCL_TLS_ECDHE_RSA_AES_128_GCM_SHA256 0xc02f
+#define OP_PCL_TLS_ECDHE_RSA_AES_256_GCM_SHA384 0xc030
+#define OP_PCL_TLS_ECDH_RSA_AES_128_GCM_SHA256 0xc031
+#define OP_PCL_TLS_ECDH_RSA_AES_256_GCM_SHA384 0xc032
+#define OP_PCL_TLS_ECDHE_PSK_RC4_128_SHA 0xc033
+#define OP_PCL_TLS_ECDHE_PSK_3DES_EDE_CBC_SHA 0xc034
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA 0xc035
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA 0xc036
+#define OP_PCL_TLS_ECDHE_PSK_AES_128_CBC_SHA256 0xc037
+#define OP_PCL_TLS_ECDHE_PSK_AES_256_CBC_SHA384 0xc038
+
/* #define OP_PCL_TLS10_3DES_EDE_CBC_MD5 0x0023 */

#define OP_PCL_TLS10_3DES_EDE_CBC_SHA 0x001f
@@ -702,6 +924,13 @@
#define OP_PCL_TLS10_AES_256_CBC_SHA384 0xff63
#define OP_PCL_TLS10_AES_256_CBC_SHA512 0xff65

+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA160 0xff90
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA384 0xff93
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA224 0xff94
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA512 0xff95
+#define OP_PCL_TLS_PVT_AES_192_CBC_SHA256 0xff96
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FE 0xfffe
+#define OP_PCL_TLS_PVT_MASTER_SECRET_PRF_FF 0xffff


/* For TLS 1.1 - OP_PCLID_TLS11 */
@@ -1043,7 +1272,6 @@
#define OP_PCL_DTLS_DES_CBC_SHA_6 0x0015
#define OP_PCL_DTLS_DES_CBC_SHA_7 0x001a

-
#define OP_PCL_DTLS_3DES_EDE_CBC_MD5 0xff23
#define OP_PCL_DTLS_3DES_EDE_CBC_SHA160 0xff30
#define OP_PCL_DTLS_3DES_EDE_CBC_SHA224 0xff34
@@ -1076,17 +1304,126 @@
/* MacSec protinfos */
#define OP_PCL_MACSEC 0x0001

+/* 3G DCRC protinfos */
+#define OP_PCL_3G_DCRC_CRC7 0x0710
+#define OP_PCL_3G_DCRC_CRC11 0x0b10
+
+/* 3G RLC protinfos */
+#define OP_PCL_3G_RLC_NULL 0x0000
+#define OP_PCL_3G_RLC_KASUMI 0x0001
+#define OP_PCL_3G_RLC_SNOW 0x0002
+
+/* LTE protinfos */
+#define OP_PCL_LTE_NULL 0x0000
+#define OP_PCL_LTE_SNOW 0x0001
+#define OP_PCL_LTE_AES 0x0002
+#define OP_PCL_LTE_ZUC 0x0003
+
+/* LTE mixed protinfos */
+#define OP_PCL_LTE_MIXED_AUTH_SHIFT 0
+#define OP_PCL_LTE_MIXED_AUTH_MASK (3 << OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SHIFT 8
+#define OP_PCL_LTE_MIXED_ENC_MASK (3 << OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_NULL (OP_PCL_LTE_NULL << \
+ OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_SNOW (OP_PCL_LTE_SNOW << \
+ OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_AES (OP_PCL_LTE_AES << \
+ OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_AUTH_ZUC (OP_PCL_LTE_ZUC << \
+ OP_PCL_LTE_MIXED_AUTH_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_NULL (OP_PCL_LTE_NULL << \
+ OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_SNOW (OP_PCL_LTE_SNOW << \
+ OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_AES (OP_PCL_LTE_AES << \
+ OP_PCL_LTE_MIXED_ENC_SHIFT)
+#define OP_PCL_LTE_MIXED_ENC_ZUC (OP_PCL_LTE_ZUC << \
+ OP_PCL_LTE_MIXED_ENC_SHIFT)
+
+/* PKI unidirectional protocol protinfo bits */
+#define OP_PCL_PKPROT_DSA_MSG BIT(10)
+#define OP_PCL_PKPROT_HASH_SHIFT 7
+#define OP_PCL_PKPROT_HASH_MASK (7 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_MD5 (0 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA1 (1 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA224 (2 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA256 (3 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA384 (4 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_HASH_SHA512 (5 << OP_PCL_PKPROT_HASH_SHIFT)
+#define OP_PCL_PKPROT_EKT_Z BIT(6)
+#define OP_PCL_PKPROT_DECRYPT_Z BIT(5)
+#define OP_PCL_PKPROT_EKT_PRI BIT(4)
+#define OP_PCL_PKPROT_TEST BIT(3)
+#define OP_PCL_PKPROT_DECRYPT_PRI BIT(2)
+#define OP_PCL_PKPROT_ECC BIT(1)
+#define OP_PCL_PKPROT_F2M BIT(0)
+
+/* Blob protinfos */
+#define OP_PCL_BLOB_TKEK_SHIFT 9
+#define OP_PCL_BLOB_TKEK BIT(9)
+#define OP_PCL_BLOB_EKT_SHIFT 8
+#define OP_PCL_BLOB_EKT BIT(8)
+#define OP_PCL_BLOB_REG_SHIFT 4
+#define OP_PCL_BLOB_REG_MASK (0xf << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_MEMORY (0x0 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY1 (0x1 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_KEY2 (0x3 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_AFHA_SBOX (0x5 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_SPLIT (0x7 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_REG_PKE (0x9 << OP_PCL_BLOB_REG_SHIFT)
+#define OP_PCL_BLOB_SEC_MEM_SHIFT 3
+#define OP_PCL_BLOB_SEC_MEM BIT(3)
+#define OP_PCL_BLOB_BLACK BIT(2)
+#define OP_PCL_BLOB_FORMAT_SHIFT 0
+#define OP_PCL_BLOB_FORMAT_MASK 0x3
+#define OP_PCL_BLOB_FORMAT_NORMAL 0
+#define OP_PCL_BLOB_FORMAT_MASTER_VER 2
+#define OP_PCL_BLOB_FORMAT_TEST 3
+
+/* IKE / IKEv2 protinfos */
+#define OP_PCL_IKE_HMAC_MD5 0x0100
+#define OP_PCL_IKE_HMAC_SHA1 0x0200
+#define OP_PCL_IKE_HMAC_AES128_CBC 0x0400
+#define OP_PCL_IKE_HMAC_SHA256 0x0500
+#define OP_PCL_IKE_HMAC_SHA384 0x0600
+#define OP_PCL_IKE_HMAC_SHA512 0x0700
+#define OP_PCL_IKE_HMAC_AES128_CMAC 0x0800
+
/* PKI unidirectional protocol protinfo bits */
-#define OP_PCL_PKPROT_TEST 0x0008
-#define OP_PCL_PKPROT_DECRYPT 0x0004
-#define OP_PCL_PKPROT_ECC 0x0002
-#define OP_PCL_PKPROT_F2M 0x0001
+#define OP_PCL_PKPROT_TEST BIT(3)
+#define OP_PCL_PKPROT_DECRYPT BIT(2)
+#define OP_PCL_PKPROT_ECC BIT(1)
+#define OP_PCL_PKPROT_F2M BIT(0)
+
+/* RSA Protinfo */
+#define OP_PCL_RSAPROT_OP_MASK 3
+#define OP_PCL_RSAPROT_OP_ENC_F_IN 0
+#define OP_PCL_RSAPROT_OP_ENC_F_OUT 1
+#define OP_PCL_RSAPROT_OP_DEC_ND 0
+#define OP_PCL_RSAPROT_OP_DEC_PQD 1
+#define OP_PCL_RSAPROT_OP_DEC_PQDPDQC 2
+#define OP_PCL_RSAPROT_FFF_SHIFT 4
+#define OP_PCL_RSAPROT_FFF_MASK (7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_RED (0 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_ENC (1 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_ENC (5 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_EKT (3 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_FFF_TK_EKT (7 << OP_PCL_RSAPROT_FFF_SHIFT)
+#define OP_PCL_RSAPROT_PPP_SHIFT 8
+#define OP_PCL_RSAPROT_PPP_MASK (7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_RED (0 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_ENC (1 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_ENC (5 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_EKT (3 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_PPP_TK_EKT (7 << OP_PCL_RSAPROT_PPP_SHIFT)
+#define OP_PCL_RSAPROT_FMT_PKCSV15 BIT(12)

/* For non-protocol/alg-only op commands */
#define OP_ALG_TYPE_SHIFT 24
#define OP_ALG_TYPE_MASK (0x7 << OP_ALG_TYPE_SHIFT)
-#define OP_ALG_TYPE_CLASS1 (2 << OP_ALG_TYPE_SHIFT)
-#define OP_ALG_TYPE_CLASS2 (4 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS1 (0x2 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS2 (0x4 << OP_ALG_TYPE_SHIFT)

#define OP_ALG_ALGSEL_SHIFT 16
#define OP_ALG_ALGSEL_MASK (0xff << OP_ALG_ALGSEL_SHIFT)
@@ -1102,16 +1439,19 @@
#define OP_ALG_ALGSEL_SHA384 (0x44 << OP_ALG_ALGSEL_SHIFT)
#define OP_ALG_ALGSEL_SHA512 (0x45 << OP_ALG_ALGSEL_SHIFT)
#define OP_ALG_ALGSEL_RNG (0x50 << OP_ALG_ALGSEL_SHIFT)
-#define OP_ALG_ALGSEL_SNOW (0x60 << OP_ALG_ALGSEL_SHIFT)
#define OP_ALG_ALGSEL_SNOW_F8 (0x60 << OP_ALG_ALGSEL_SHIFT)
#define OP_ALG_ALGSEL_KASUMI (0x70 << OP_ALG_ALGSEL_SHIFT)
#define OP_ALG_ALGSEL_CRC (0x90 << OP_ALG_ALGSEL_SHIFT)
#define OP_ALG_ALGSEL_SNOW_F9 (0xA0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCE (0xB0 << OP_ALG_ALGSEL_SHIFT)
+#define OP_ALG_ALGSEL_ZUCA (0xC0 << OP_ALG_ALGSEL_SHIFT)

#define OP_ALG_AAI_SHIFT 4
-#define OP_ALG_AAI_MASK (0x1ff << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_MASK (0x3ff << OP_ALG_AAI_SHIFT)

-/* blockcipher AAI set */
+/* block cipher AAI set */
+#define OP_ALG_AESA_MODE_MASK (0xF0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR (0x00 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_CTR_MOD128 (0x00 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_CTR_MOD8 (0x01 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_CTR_MOD16 (0x02 << OP_ALG_AAI_SHIFT)
@@ -1139,17 +1479,24 @@
#define OP_ALG_AAI_GCM (0x90 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_CBC_XCBCMAC (0xa0 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_CTR_XCBCMAC (0xb0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CBC_CMAC (0xc0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC_LTE (0xd0 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_CTR_CMAC (0xe0 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_CHECKODD (0x80 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_DK (0x100 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_C2K (0x200 << OP_ALG_AAI_SHIFT)

/* randomizer AAI set */
+#define OP_ALG_RNG_MODE_MASK (0x30 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_RNG (0x00 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_RNG_NZB (0x10 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_RNG_OBP (0x20 << OP_ALG_AAI_SHIFT)

/* RNG4 AAI set */
-#define OP_ALG_AAI_RNG4_SH_0 (0x00 << OP_ALG_AAI_SHIFT)
-#define OP_ALG_AAI_RNG4_SH_1 (0x01 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_SHIFT OP_ALG_AAI_SHIFT
+#define OP_ALG_AAI_RNG4_SH_MASK (0x03 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_0 (0x00 << OP_ALG_AAI_RNG4_SH_SHIFT)
+#define OP_ALG_AAI_RNG4_SH_1 (0x01 << OP_ALG_AAI_RNG4_SH_SHIFT)
#define OP_ALG_AAI_RNG4_PS (0x40 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_RNG4_AI (0x80 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_RNG4_SK (0x100 << OP_ALG_AAI_SHIFT)
@@ -1161,14 +1508,16 @@
#define OP_ALG_AAI_HMAC_PRECOMP (0x04 << OP_ALG_AAI_SHIFT)

/* CRC AAI set*/
+#define OP_ALG_CRC_POLY_MASK (0x07 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_802 (0x01 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_3385 (0x02 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_CUST_POLY (0x04 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_DIS (0x10 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_DOS (0x20 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_DOC (0x40 << OP_ALG_AAI_SHIFT)
+#define OP_ALG_AAI_IVZ (0x80 << OP_ALG_AAI_SHIFT)

-/* Kasumi/SNOW AAI set */
+/* Kasumi/SNOW/ZUC AAI set */
#define OP_ALG_AAI_F8 (0xc0 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_F9 (0xc8 << OP_ALG_AAI_SHIFT)
#define OP_ALG_AAI_GSM (0x10 << OP_ALG_AAI_SHIFT)
@@ -1183,123 +1532,646 @@

#define OP_ALG_ICV_SHIFT 1
#define OP_ALG_ICV_MASK (1 << OP_ALG_ICV_SHIFT)
-#define OP_ALG_ICV_OFF (0 << OP_ALG_ICV_SHIFT)
-#define OP_ALG_ICV_ON (1 << OP_ALG_ICV_SHIFT)
+#define OP_ALG_ICV_OFF 0
+#define OP_ALG_ICV_ON BIT(1)

#define OP_ALG_DIR_SHIFT 0
#define OP_ALG_DIR_MASK 1
#define OP_ALG_DECRYPT 0
-#define OP_ALG_ENCRYPT 1
+#define OP_ALG_ENCRYPT BIT(0)

/* PKHA algorithm type set */
-#define OP_ALG_PK 0x00800000
-#define OP_ALG_PK_FUN_MASK 0x3f /* clrmem, modmath, or cpymem */
+#define OP_ALG_PK 0x00800000
+#define OP_ALG_PK_FUN_MASK 0x3f /* clrmem, modmath, or cpymem */

/* PKHA mode clear memory functions */
-#define OP_ALG_PKMODE_A_RAM 0x80000
-#define OP_ALG_PKMODE_B_RAM 0x40000
-#define OP_ALG_PKMODE_E_RAM 0x20000
-#define OP_ALG_PKMODE_N_RAM 0x10000
-#define OP_ALG_PKMODE_CLEARMEM 0x00001
+#define OP_ALG_PKMODE_A_RAM BIT(19)
+#define OP_ALG_PKMODE_B_RAM BIT(18)
+#define OP_ALG_PKMODE_E_RAM BIT(17)
+#define OP_ALG_PKMODE_N_RAM BIT(16)
+#define OP_ALG_PKMODE_CLEARMEM BIT(0)
+
+/* PKHA mode clear memory function combinations */
+#define OP_ALG_PKMODE_CLEARMEM_ALL (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_A_RAM | \
+ OP_ALG_PKMODE_B_RAM | \
+ OP_ALG_PKMODE_N_RAM | \
+ OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABE (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_A_RAM | \
+ OP_ALG_PKMODE_B_RAM | \
+ OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_ABN (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_A_RAM | \
+ OP_ALG_PKMODE_B_RAM | \
+ OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AB (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_A_RAM | \
+ OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AEN (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_A_RAM | \
+ OP_ALG_PKMODE_E_RAM | \
+ OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AE (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_A_RAM | \
+ OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_AN (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_A_RAM | \
+ OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_A (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_A_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BEN (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_B_RAM | \
+ OP_ALG_PKMODE_E_RAM | \
+ OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BE (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_B_RAM | \
+ OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_BN (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_B_RAM | \
+ OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_B (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_B_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_EN (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_E_RAM | \
+ OP_ALG_PKMODE_N_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_E (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_E_RAM)
+#define OP_ALG_PKMODE_CLEARMEM_N (OP_ALG_PKMODE_CLEARMEM | \
+ OP_ALG_PKMODE_N_RAM)

/* PKHA mode modular-arithmetic functions */
-#define OP_ALG_PKMODE_MOD_IN_MONTY 0x80000
-#define OP_ALG_PKMODE_MOD_OUT_MONTY 0x40000
-#define OP_ALG_PKMODE_MOD_F2M 0x20000
-#define OP_ALG_PKMODE_MOD_R2_IN 0x10000
-#define OP_ALG_PKMODE_PRJECTV 0x00800
-#define OP_ALG_PKMODE_TIME_EQ 0x400
-#define OP_ALG_PKMODE_OUT_B 0x000
-#define OP_ALG_PKMODE_OUT_A 0x100
-#define OP_ALG_PKMODE_MOD_ADD 0x002
-#define OP_ALG_PKMODE_MOD_SUB_AB 0x003
-#define OP_ALG_PKMODE_MOD_SUB_BA 0x004
-#define OP_ALG_PKMODE_MOD_MULT 0x005
-#define OP_ALG_PKMODE_MOD_EXPO 0x006
-#define OP_ALG_PKMODE_MOD_REDUCT 0x007
-#define OP_ALG_PKMODE_MOD_INV 0x008
-#define OP_ALG_PKMODE_MOD_ECC_ADD 0x009
-#define OP_ALG_PKMODE_MOD_ECC_DBL 0x00a
-#define OP_ALG_PKMODE_MOD_ECC_MULT 0x00b
-#define OP_ALG_PKMODE_MOD_MONT_CNST 0x00c
-#define OP_ALG_PKMODE_MOD_CRT_CNST 0x00d
-#define OP_ALG_PKMODE_MOD_GCD 0x00e
-#define OP_ALG_PKMODE_MOD_PRIMALITY 0x00f
+#define OP_ALG_PKMODE_MOD_IN_MONTY BIT(19)
+#define OP_ALG_PKMODE_MOD_OUT_MONTY BIT(18)
+#define OP_ALG_PKMODE_MOD_F2M BIT(17)
+#define OP_ALG_PKMODE_MOD_R2_IN BIT(16)
+#define OP_ALG_PKMODE_PRJECTV BIT(11)
+#define OP_ALG_PKMODE_TIME_EQ BIT(10)
+
+#define OP_ALG_PKMODE_OUT_B 0x000
+#define OP_ALG_PKMODE_OUT_A 0x100
+
+/*
+ * PKHA mode modular-arithmetic integer functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_MOD_ADD 0x002
+#define OP_ALG_PKMODE_MOD_SUB_AB 0x003
+#define OP_ALG_PKMODE_MOD_SUB_BA 0x004
+#define OP_ALG_PKMODE_MOD_MULT 0x005
+#define OP_ALG_PKMODE_MOD_MULT_IM (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_MULT_IM_OM (0x005 | OP_ALG_PKMODE_MOD_IN_MONTY \
+ | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO 0x006
+#define OP_ALG_PKMODE_MOD_EXPO_TEQ (0x006 | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_EXPO_IM (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_MOD_EXPO_IM_TEQ (0x006 | OP_ALG_PKMODE_MOD_IN_MONTY \
+ | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_MOD_REDUCT 0x007
+#define OP_ALG_PKMODE_MOD_INV 0x008
+#define OP_ALG_PKMODE_MOD_ECC_ADD 0x009
+#define OP_ALG_PKMODE_MOD_ECC_DBL 0x00a
+#define OP_ALG_PKMODE_MOD_ECC_MULT 0x00b
+#define OP_ALG_PKMODE_MOD_MONT_CNST 0x00c
+#define OP_ALG_PKMODE_MOD_CRT_CNST 0x00d
+#define OP_ALG_PKMODE_MOD_GCD 0x00e
+#define OP_ALG_PKMODE_MOD_PRIMALITY 0x00f
+#define OP_ALG_PKMODE_MOD_SML_EXP 0x016
+
+/*
+ * PKHA mode modular-arithmetic F2m functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_F2M_ADD (0x002 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL (0x005 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_MUL_IM (0x005 | OP_ALG_PKMODE_MOD_F2M \
+ | OP_ALG_PKMODE_MOD_IN_MONTY)
+#define OP_ALG_PKMODE_F2M_MUL_IM_OM (0x005 | OP_ALG_PKMODE_MOD_F2M \
+ | OP_ALG_PKMODE_MOD_IN_MONTY \
+ | OP_ALG_PKMODE_MOD_OUT_MONTY)
+#define OP_ALG_PKMODE_F2M_EXP (0x006 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_EXP_TEQ (0x006 | OP_ALG_PKMODE_MOD_F2M \
+ | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_F2M_AMODN (0x007 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_INV (0x008 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_R2 (0x00c | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_GCD (0x00e | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_F2M_SML_EXP (0x016 | OP_ALG_PKMODE_MOD_F2M)
+
+/*
+ * PKHA mode ECC Integer arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_MOD_ADD 0x009
+#define OP_ALG_PKMODE_ECC_MOD_ADD_IM_OM_PROJ \
+ (0x009 | OP_ALG_PKMODE_MOD_IN_MONTY \
+ | OP_ALG_PKMODE_MOD_OUT_MONTY \
+ | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_DBL 0x00a
+#define OP_ALG_PKMODE_ECC_MOD_DBL_IM_OM_PROJ \
+ (0x00a | OP_ALG_PKMODE_MOD_IN_MONTY \
+ | OP_ALG_PKMODE_MOD_OUT_MONTY \
+ | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL 0x00b
+#define OP_ALG_PKMODE_ECC_MOD_MUL_TEQ (0x00b | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2 (0x00b | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_TEQ \
+ (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+ | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ \
+ (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+ | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_MOD_MUL_R2_PROJ_TEQ \
+ (0x00b | OP_ALG_PKMODE_MOD_R2_IN \
+ | OP_ALG_PKMODE_PRJECTV \
+ | OP_ALG_PKMODE_TIME_EQ)
+
+/*
+ * PKHA mode ECC F2m arithmetic functions
+ * Can be ORed with OP_ALG_PKMODE_OUT_A to change destination from B
+ */
+#define OP_ALG_PKMODE_ECC_F2M_ADD (0x009 | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_ADD_IM_OM_PROJ \
+ (0x009 | OP_ALG_PKMODE_MOD_F2M \
+ | OP_ALG_PKMODE_MOD_IN_MONTY \
+ | OP_ALG_PKMODE_MOD_OUT_MONTY \
+ | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_DBL (0x00a | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_DBL_IM_OM_PROJ \
+ (0x00a | OP_ALG_PKMODE_MOD_F2M \
+ | OP_ALG_PKMODE_MOD_IN_MONTY \
+ | OP_ALG_PKMODE_MOD_OUT_MONTY \
+ | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL (0x00b | OP_ALG_PKMODE_MOD_F2M)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_TEQ \
+ (0x00b | OP_ALG_PKMODE_MOD_F2M \
+ | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2 \
+ (0x00b | OP_ALG_PKMODE_MOD_F2M \
+ | OP_ALG_PKMODE_MOD_R2_IN)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_TEQ \
+ (0x00b | OP_ALG_PKMODE_MOD_F2M \
+ | OP_ALG_PKMODE_MOD_R2_IN \
+ | OP_ALG_PKMODE_TIME_EQ)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ \
+ (0x00b | OP_ALG_PKMODE_MOD_F2M \
+ | OP_ALG_PKMODE_MOD_R2_IN \
+ | OP_ALG_PKMODE_PRJECTV)
+#define OP_ALG_PKMODE_ECC_F2M_MUL_R2_PROJ_TEQ \
+ (0x00b | OP_ALG_PKMODE_MOD_F2M \
+ | OP_ALG_PKMODE_MOD_R2_IN \
+ | OP_ALG_PKMODE_PRJECTV \
+ | OP_ALG_PKMODE_TIME_EQ)

/* PKHA mode copy-memory functions */
-#define OP_ALG_PKMODE_SRC_REG_SHIFT 17
-#define OP_ALG_PKMODE_SRC_REG_MASK (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
-#define OP_ALG_PKMODE_DST_REG_SHIFT 10
-#define OP_ALG_PKMODE_DST_REG_MASK (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
-#define OP_ALG_PKMODE_SRC_SEG_SHIFT 8
-#define OP_ALG_PKMODE_SRC_SEG_MASK (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
-#define OP_ALG_PKMODE_DST_SEG_SHIFT 6
-#define OP_ALG_PKMODE_DST_SEG_MASK (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
-
-#define OP_ALG_PKMODE_SRC_REG_A (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
-#define OP_ALG_PKMODE_SRC_REG_B (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
-#define OP_ALG_PKMODE_SRC_REG_N (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
-#define OP_ALG_PKMODE_DST_REG_A (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
-#define OP_ALG_PKMODE_DST_REG_B (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
-#define OP_ALG_PKMODE_DST_REG_E (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
-#define OP_ALG_PKMODE_DST_REG_N (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
-#define OP_ALG_PKMODE_SRC_SEG_0 (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
-#define OP_ALG_PKMODE_SRC_SEG_1 (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
-#define OP_ALG_PKMODE_SRC_SEG_2 (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
-#define OP_ALG_PKMODE_SRC_SEG_3 (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
-#define OP_ALG_PKMODE_DST_SEG_0 (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
-#define OP_ALG_PKMODE_DST_SEG_1 (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
-#define OP_ALG_PKMODE_DST_SEG_2 (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
-#define OP_ALG_PKMODE_DST_SEG_3 (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
-#define OP_ALG_PKMODE_CPYMEM_N_SZ 0x80
-#define OP_ALG_PKMODE_CPYMEM_SRC_SZ 0x81
+#define OP_ALG_PKMODE_SRC_REG_SHIFT 17
+#define OP_ALG_PKMODE_SRC_REG_MASK (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_SHIFT 10
+#define OP_ALG_PKMODE_DST_REG_MASK (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_SHIFT 8
+#define OP_ALG_PKMODE_SRC_SEG_MASK (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_SHIFT 6
+#define OP_ALG_PKMODE_DST_SEG_MASK (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+#define OP_ALG_PKMODE_SRC_REG_A (0 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_B (1 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_REG_N (3 << OP_ALG_PKMODE_SRC_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_A (0 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_B (1 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_E (2 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_DST_REG_N (3 << OP_ALG_PKMODE_DST_REG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_0 (0 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_1 (1 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_2 (2 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_SRC_SEG_3 (3 << OP_ALG_PKMODE_SRC_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_0 (0 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_1 (1 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_2 (2 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+#define OP_ALG_PKMODE_DST_SEG_3 (3 << OP_ALG_PKMODE_DST_SEG_SHIFT)
+
+/* PKHA mode copy-memory functions - amount based on N SIZE */
+#define OP_ALG_PKMODE_COPY_NSZ 0x10
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B0 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B1 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B2 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A0_B3 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B0 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B1 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B2 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A1_B3 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B0 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B1 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B2 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A2_B3 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B0 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B1 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B2 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_A3_B3 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A0 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A1 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A2 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B0_A3 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A0 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A1 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A2 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B1_A3 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A0 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A1 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A2 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B2_A3 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A0 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A1 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A2 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_NSZ_B3_A3 (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_NSZ_A_B (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_A_E (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_A_N (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_B_A (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_B_E (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_NSZ_B_N (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_NSZ_N_A (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_N | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_NSZ_N_B (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_N | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_NSZ_N_E (OP_ALG_PKMODE_COPY_NSZ | \
+ OP_ALG_PKMODE_SRC_REG_N | \
+ OP_ALG_PKMODE_DST_REG_E)
+
+/* PKHA mode copy-memory functions - amount based on SRC SIZE */
+#define OP_ALG_PKMODE_COPY_SSZ 0x11
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B0 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B1 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B2 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A0_B3 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B0 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B1 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B2 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A1_B3 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B0 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B1 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B2 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A2_B3 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B0 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B1 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B2 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_A3_B3 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_B | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A0 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A1 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A2 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B0_A3 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A0 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A1 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A2 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B1_A3 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_1 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A0 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A1 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A2 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B2_A3 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_2 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A0 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A1 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_1)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A2 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_2)
+#define OP_ALG_PKMODE_COPY_SSZ_B3_A3 (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_SRC_SEG_3 | \
+ OP_ALG_PKMODE_DST_REG_A | \
+ OP_ALG_PKMODE_DST_SEG_3)
+
+#define OP_ALG_PKMODE_COPY_SSZ_A_B (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_A_E (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_A_N (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_A | \
+ OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_B_A (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_B_E (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_E)
+#define OP_ALG_PKMODE_COPY_SSZ_B_N (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_B | \
+ OP_ALG_PKMODE_DST_REG_N)
+#define OP_ALG_PKMODE_COPY_SSZ_N_A (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_N | \
+ OP_ALG_PKMODE_DST_REG_A)
+#define OP_ALG_PKMODE_COPY_SSZ_N_B (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_N | \
+ OP_ALG_PKMODE_DST_REG_B)
+#define OP_ALG_PKMODE_COPY_SSZ_N_E (OP_ALG_PKMODE_COPY_SSZ | \
+ OP_ALG_PKMODE_SRC_REG_N | \
+ OP_ALG_PKMODE_DST_REG_E)

/*
* SEQ_IN_PTR Command Constructs
*/

/* Release Buffers */
-#define SQIN_RBS 0x04000000
+#define SQIN_RBS BIT(26)

/* Sequence pointer is really a descriptor */
-#define SQIN_INL 0x02000000
+#define SQIN_INL BIT(25)

/* Sequence pointer is a scatter-gather table */
-#define SQIN_SGF 0x01000000
+#define SQIN_SGF BIT(24)

/* Appends to a previous pointer */
-#define SQIN_PRE 0x00800000
+#define SQIN_PRE BIT(23)

/* Use extended length following pointer */
-#define SQIN_EXT 0x00400000
+#define SQIN_EXT BIT(22)

/* Restore sequence with pointer/length */
-#define SQIN_RTO 0x00200000
+#define SQIN_RTO BIT(21)

/* Replace job descriptor */
-#define SQIN_RJD 0x00100000
+#define SQIN_RJD BIT(20)

-#define SQIN_LEN_SHIFT 0
-#define SQIN_LEN_MASK (0xffff << SQIN_LEN_SHIFT)
+/* Sequence Out Pointer - start a new input sequence using output sequence */
+#define SQIN_SOP BIT(19)
+
+#define SQIN_LEN_SHIFT 0
+#define SQIN_LEN_MASK (0xffff << SQIN_LEN_SHIFT)

/*
* SEQ_OUT_PTR Command Constructs
*/

/* Sequence pointer is a scatter-gather table */
-#define SQOUT_SGF 0x01000000
+#define SQOUT_SGF BIT(24)

/* Appends to a previous pointer */
-#define SQOUT_PRE SQIN_PRE
+#define SQOUT_PRE BIT(23)

/* Restore sequence with pointer/length */
-#define SQOUT_RTO SQIN_RTO
+#define SQOUT_RTO BIT(21)
+
+/*
+ * Ignore length field, add current output frame length back to SOL register.
+ * Reset tracking length of bytes written to output frame.
+ * Must be used together with SQOUT_RTO.
+ */
+#define SQOUT_RST BIT(20)
+
+/* Allow "write safe" transactions for this Output Sequence */
+#define SQOUT_EWS BIT(19)

/* Use extended length following pointer */
-#define SQOUT_EXT 0x00400000
+#define SQOUT_EXT BIT(22)

-#define SQOUT_LEN_SHIFT 0
-#define SQOUT_LEN_MASK (0xffff << SQOUT_LEN_SHIFT)
+#define SQOUT_LEN_SHIFT 0
+#define SQOUT_LEN_MASK (0xffff << SQOUT_LEN_SHIFT)


/*
@@ -1328,7 +2200,7 @@

#define MOVE_WAITCOMP_SHIFT 24
#define MOVE_WAITCOMP_MASK (1 << MOVE_WAITCOMP_SHIFT)
-#define MOVE_WAITCOMP (1 << MOVE_WAITCOMP_SHIFT)
+#define MOVE_WAITCOMP BIT(24)

#define MOVE_SRC_SHIFT 20
#define MOVE_SRC_MASK (0x0f << MOVE_SRC_SHIFT)
@@ -1342,6 +2214,7 @@
#define MOVE_SRC_MATH3 (0x07 << MOVE_SRC_SHIFT)
#define MOVE_SRC_INFIFO (0x08 << MOVE_SRC_SHIFT)
#define MOVE_SRC_INFIFO_CL (0x09 << MOVE_SRC_SHIFT)
+#define MOVE_SRC_INFIFO_NO_NFIFO (0x0a << MOVE_SRC_SHIFT)

#define MOVE_DEST_SHIFT 16
#define MOVE_DEST_MASK (0x0f << MOVE_DEST_SHIFT)
@@ -1355,10 +2228,11 @@
#define MOVE_DEST_MATH3 (0x07 << MOVE_DEST_SHIFT)
#define MOVE_DEST_CLASS1INFIFO (0x08 << MOVE_DEST_SHIFT)
#define MOVE_DEST_CLASS2INFIFO (0x09 << MOVE_DEST_SHIFT)
-#define MOVE_DEST_INFIFO_NOINFO (0x0a << MOVE_DEST_SHIFT)
+#define MOVE_DEST_INFIFO (0x0a << MOVE_DEST_SHIFT)
#define MOVE_DEST_PK_A (0x0c << MOVE_DEST_SHIFT)
#define MOVE_DEST_CLASS1KEY (0x0d << MOVE_DEST_SHIFT)
#define MOVE_DEST_CLASS2KEY (0x0e << MOVE_DEST_SHIFT)
+#define MOVE_DEST_ALTSOURCE (0x0f << MOVE_DEST_SHIFT)

#define MOVE_OFFSET_SHIFT 8
#define MOVE_OFFSET_MASK (0xff << MOVE_OFFSET_SHIFT)
@@ -1368,6 +2242,16 @@

#define MOVELEN_MRSEL_SHIFT 0
#define MOVELEN_MRSEL_MASK (0x3 << MOVE_LEN_SHIFT)
+#define MOVELEN_MRSEL_MATH0 (0 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH1 (1 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH2 (2 << MOVELEN_MRSEL_SHIFT)
+#define MOVELEN_MRSEL_MATH3 (3 << MOVELEN_MRSEL_SHIFT)
+
+#define MOVELEN_SIZE_SHIFT 6
+#define MOVELEN_SIZE_MASK (0x3 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_WORD (0x01 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_BYTE (0x02 << MOVELEN_SIZE_SHIFT)
+#define MOVELEN_SIZE_DWORD (0x03 << MOVELEN_SIZE_SHIFT)

/*
* MATH Command Constructs
@@ -1375,15 +2259,24 @@

#define MATH_IFB_SHIFT 26
#define MATH_IFB_MASK (1 << MATH_IFB_SHIFT)
-#define MATH_IFB (1 << MATH_IFB_SHIFT)
+#define MATH_IFB BIT(26)

#define MATH_NFU_SHIFT 25
#define MATH_NFU_MASK (1 << MATH_NFU_SHIFT)
-#define MATH_NFU (1 << MATH_NFU_SHIFT)
+#define MATH_NFU BIT(25)

+/* STL for MATH, SSEL for MATHI */
#define MATH_STL_SHIFT 24
#define MATH_STL_MASK (1 << MATH_STL_SHIFT)
-#define MATH_STL (1 << MATH_STL_SHIFT)
+#define MATH_STL BIT(24)
+
+#define MATH_SSEL_SHIFT 24
+#define MATH_SSEL_MASK (1 << MATH_SSEL_SHIFT)
+#define MATH_SSEL BIT(24)
+
+#define MATH_SWP_SHIFT 0
+#define MATH_SWP_MASK (1 << MATH_SWP_SHIFT)
+#define MATH_SWP BIT(0)

/* Function selectors */
#define MATH_FUN_SHIFT 20
@@ -1398,7 +2291,9 @@
#define MATH_FUN_LSHIFT (0x07 << MATH_FUN_SHIFT)
#define MATH_FUN_RSHIFT (0x08 << MATH_FUN_SHIFT)
#define MATH_FUN_SHLD (0x09 << MATH_FUN_SHIFT)
-#define MATH_FUN_ZBYT (0x0a << MATH_FUN_SHIFT)
+#define MATH_FUN_ZBYT (0x0a << MATH_FUN_SHIFT) /* ZBYT is for MATH */
+#define MATH_FUN_FBYT (0x0a << MATH_FUN_SHIFT) /* FBYT is for MATHI */
+#define MATH_FUN_BSWAP (0x0b << MATH_FUN_SHIFT)

/* Source 0 selectors */
#define MATH_SRC0_SHIFT 16
@@ -1414,33 +2309,45 @@
#define MATH_SRC0_VARSEQINLEN (0x0a << MATH_SRC0_SHIFT)
#define MATH_SRC0_VARSEQOUTLEN (0x0b << MATH_SRC0_SHIFT)
#define MATH_SRC0_ZERO (0x0c << MATH_SRC0_SHIFT)
+#define MATH_SRC0_ONE (0x0f << MATH_SRC0_SHIFT)

/* Source 1 selectors */
#define MATH_SRC1_SHIFT 12
+#define MATHI_SRC1_SHIFT 16
#define MATH_SRC1_MASK (0x0f << MATH_SRC1_SHIFT)
#define MATH_SRC1_REG0 (0x00 << MATH_SRC1_SHIFT)
#define MATH_SRC1_REG1 (0x01 << MATH_SRC1_SHIFT)
#define MATH_SRC1_REG2 (0x02 << MATH_SRC1_SHIFT)
#define MATH_SRC1_REG3 (0x03 << MATH_SRC1_SHIFT)
#define MATH_SRC1_IMM (0x04 << MATH_SRC1_SHIFT)
-#define MATH_SRC1_DPOVRD (0x07 << MATH_SRC0_SHIFT)
+#define MATH_SRC1_DPOVRD (0x07 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQINLEN (0x08 << MATH_SRC1_SHIFT)
+#define MATH_SRC1_VARSEQOUTLEN (0x09 << MATH_SRC1_SHIFT)
#define MATH_SRC1_INFIFO (0x0a << MATH_SRC1_SHIFT)
#define MATH_SRC1_OUTFIFO (0x0b << MATH_SRC1_SHIFT)
#define MATH_SRC1_ONE (0x0c << MATH_SRC1_SHIFT)
+#define MATH_SRC1_JOBSOURCE (0x0d << MATH_SRC1_SHIFT)
+#define MATH_SRC1_ZERO (0x0f << MATH_SRC1_SHIFT)

/* Destination selectors */
#define MATH_DEST_SHIFT 8
+#define MATHI_DEST_SHIFT 12
#define MATH_DEST_MASK (0x0f << MATH_DEST_SHIFT)
#define MATH_DEST_REG0 (0x00 << MATH_DEST_SHIFT)
#define MATH_DEST_REG1 (0x01 << MATH_DEST_SHIFT)
#define MATH_DEST_REG2 (0x02 << MATH_DEST_SHIFT)
#define MATH_DEST_REG3 (0x03 << MATH_DEST_SHIFT)
+#define MATH_DEST_DPOVRD (0x07 << MATH_DEST_SHIFT)
#define MATH_DEST_SEQINLEN (0x08 << MATH_DEST_SHIFT)
#define MATH_DEST_SEQOUTLEN (0x09 << MATH_DEST_SHIFT)
#define MATH_DEST_VARSEQINLEN (0x0a << MATH_DEST_SHIFT)
#define MATH_DEST_VARSEQOUTLEN (0x0b << MATH_DEST_SHIFT)
#define MATH_DEST_NONE (0x0f << MATH_DEST_SHIFT)

+/* MATHI Immediate value */
+#define MATHI_IMM_SHIFT 4
+#define MATHI_IMM_MASK (0xff << MATHI_IMM_SHIFT)
+
/* Length selectors */
#define MATH_LEN_SHIFT 0
#define MATH_LEN_MASK (0x0f << MATH_LEN_SHIFT)
@@ -1462,14 +2369,18 @@

#define JUMP_JSL_SHIFT 24
#define JUMP_JSL_MASK (1 << JUMP_JSL_SHIFT)
-#define JUMP_JSL (1 << JUMP_JSL_SHIFT)
+#define JUMP_JSL BIT(24)

-#define JUMP_TYPE_SHIFT 22
-#define JUMP_TYPE_MASK (0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_SHIFT 20
+#define JUMP_TYPE_MASK (0x0f << JUMP_TYPE_SHIFT)
#define JUMP_TYPE_LOCAL (0x00 << JUMP_TYPE_SHIFT)
-#define JUMP_TYPE_NONLOCAL (0x01 << JUMP_TYPE_SHIFT)
-#define JUMP_TYPE_HALT (0x02 << JUMP_TYPE_SHIFT)
-#define JUMP_TYPE_HALT_USER (0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_INC (0x01 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_GOSUB (0x02 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_LOCAL_DEC (0x03 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_NONLOCAL (0x04 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_RETURN (0x06 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT (0x08 << JUMP_TYPE_SHIFT)
+#define JUMP_TYPE_HALT_USER (0x0c << JUMP_TYPE_SHIFT)

#define JUMP_TEST_SHIFT 16
#define JUMP_TEST_MASK (0x03 << JUMP_TEST_SHIFT)
@@ -1480,23 +2391,36 @@

/* Condition codes. JSL bit is factored in */
#define JUMP_COND_SHIFT 8
-#define JUMP_COND_MASK (0x100ff << JUMP_COND_SHIFT)
-#define JUMP_COND_PK_0 (0x80 << JUMP_COND_SHIFT)
-#define JUMP_COND_PK_GCD_1 (0x40 << JUMP_COND_SHIFT)
-#define JUMP_COND_PK_PRIME (0x20 << JUMP_COND_SHIFT)
-#define JUMP_COND_MATH_N (0x08 << JUMP_COND_SHIFT)
-#define JUMP_COND_MATH_Z (0x04 << JUMP_COND_SHIFT)
-#define JUMP_COND_MATH_C (0x02 << JUMP_COND_SHIFT)
-#define JUMP_COND_MATH_NV (0x01 << JUMP_COND_SHIFT)
-
-#define JUMP_COND_JRP ((0x80 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_SHRD ((0x40 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_SELF ((0x20 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_CALM ((0x10 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_NIP ((0x08 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_NIFP ((0x04 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_NOP ((0x02 << JUMP_COND_SHIFT) | JUMP_JSL)
-#define JUMP_COND_NCP ((0x01 << JUMP_COND_SHIFT) | JUMP_JSL)
+#define JUMP_COND_MASK ((0xff << JUMP_COND_SHIFT) | JUMP_JSL)
+#define JUMP_COND_PK_0 BIT(15)
+#define JUMP_COND_PK_GCD_1 BIT(14)
+#define JUMP_COND_PK_PRIME BIT(13)
+#define JUMP_COND_MATH_N BIT(11)
+#define JUMP_COND_MATH_Z BIT(10)
+#define JUMP_COND_MATH_C BIT(9)
+#define JUMP_COND_MATH_NV BIT(8)
+
+#define JUMP_COND_JQP (BIT(15) | JUMP_JSL)
+#define JUMP_COND_SHRD (BIT(14) | JUMP_JSL)
+#define JUMP_COND_SELF (BIT(13) | JUMP_JSL)
+#define JUMP_COND_CALM (BIT(12) | JUMP_JSL)
+#define JUMP_COND_NIP (BIT(11) | JUMP_JSL)
+#define JUMP_COND_NIFP (BIT(10) | JUMP_JSL)
+#define JUMP_COND_NOP (BIT(9) | JUMP_JSL)
+#define JUMP_COND_NCP (BIT(8) | JUMP_JSL)
+
+/* Source / destination selectors */
+#define JUMP_SRC_DST_SHIFT 12
+#define JUMP_SRC_DST_MASK (0x0f << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH0 (0x00 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH1 (0x01 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH2 (0x02 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_MATH3 (0x03 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_DPOVRD (0x07 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQINLEN (0x08 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_SEQOUTLEN (0x09 << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQINLEN (0x0a << JUMP_SRC_DST_SHIFT)
+#define JUMP_SRC_DST_VARSEQOUTLEN (0x0b << JUMP_SRC_DST_SHIFT)

#define JUMP_OFFSET_SHIFT 0
#define JUMP_OFFSET_MASK (0xff << JUMP_OFFSET_SHIFT)
@@ -1507,27 +2431,27 @@
*
*/
#define NFIFOENTRY_DEST_SHIFT 30
-#define NFIFOENTRY_DEST_MASK (3 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_MASK ((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))
#define NFIFOENTRY_DEST_DECO (0 << NFIFOENTRY_DEST_SHIFT)
#define NFIFOENTRY_DEST_CLASS1 (1 << NFIFOENTRY_DEST_SHIFT)
-#define NFIFOENTRY_DEST_CLASS2 (2 << NFIFOENTRY_DEST_SHIFT)
-#define NFIFOENTRY_DEST_BOTH (3 << NFIFOENTRY_DEST_SHIFT)
+#define NFIFOENTRY_DEST_CLASS2 ((uint32_t)(2 << NFIFOENTRY_DEST_SHIFT))
+#define NFIFOENTRY_DEST_BOTH ((uint32_t)(3 << NFIFOENTRY_DEST_SHIFT))

#define NFIFOENTRY_LC2_SHIFT 29
#define NFIFOENTRY_LC2_MASK (1 << NFIFOENTRY_LC2_SHIFT)
-#define NFIFOENTRY_LC2 (1 << NFIFOENTRY_LC2_SHIFT)
+#define NFIFOENTRY_LC2 BIT(29)

#define NFIFOENTRY_LC1_SHIFT 28
#define NFIFOENTRY_LC1_MASK (1 << NFIFOENTRY_LC1_SHIFT)
-#define NFIFOENTRY_LC1 (1 << NFIFOENTRY_LC1_SHIFT)
+#define NFIFOENTRY_LC1 BIT(28)

#define NFIFOENTRY_FC2_SHIFT 27
#define NFIFOENTRY_FC2_MASK (1 << NFIFOENTRY_FC2_SHIFT)
-#define NFIFOENTRY_FC2 (1 << NFIFOENTRY_FC2_SHIFT)
+#define NFIFOENTRY_FC2 BIT(27)

#define NFIFOENTRY_FC1_SHIFT 26
#define NFIFOENTRY_FC1_MASK (1 << NFIFOENTRY_FC1_SHIFT)
-#define NFIFOENTRY_FC1 (1 << NFIFOENTRY_FC1_SHIFT)
+#define NFIFOENTRY_FC1 BIT(26)

#define NFIFOENTRY_STYPE_SHIFT 24
#define NFIFOENTRY_STYPE_MASK (3 << NFIFOENTRY_STYPE_SHIFT)
@@ -1535,6 +2459,12 @@
#define NFIFOENTRY_STYPE_OFIFO (1 << NFIFOENTRY_STYPE_SHIFT)
#define NFIFOENTRY_STYPE_PAD (2 << NFIFOENTRY_STYPE_SHIFT)
#define NFIFOENTRY_STYPE_SNOOP (3 << NFIFOENTRY_STYPE_SHIFT)
+#define NFIFOENTRY_STYPE_ALTSOURCE ((0 << NFIFOENTRY_STYPE_SHIFT) \
+ | NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_OFIFO_SYNC ((1 << NFIFOENTRY_STYPE_SHIFT) \
+ | NFIFOENTRY_AST)
+#define NFIFOENTRY_STYPE_SNOOP_ALT ((3 << NFIFOENTRY_STYPE_SHIFT) \
+ | NFIFOENTRY_AST)

#define NFIFOENTRY_DTYPE_SHIFT 20
#define NFIFOENTRY_DTYPE_MASK (0xF << NFIFOENTRY_DTYPE_SHIFT)
@@ -1560,10 +2490,9 @@
#define NFIFOENTRY_DTYPE_PK_A (0xC << NFIFOENTRY_DTYPE_SHIFT)
#define NFIFOENTRY_DTYPE_PK_B (0xD << NFIFOENTRY_DTYPE_SHIFT)

-
#define NFIFOENTRY_BND_SHIFT 19
#define NFIFOENTRY_BND_MASK (1 << NFIFOENTRY_BND_SHIFT)
-#define NFIFOENTRY_BND (1 << NFIFOENTRY_BND_SHIFT)
+#define NFIFOENTRY_BND BIT(19)

#define NFIFOENTRY_PTYPE_SHIFT 16
#define NFIFOENTRY_PTYPE_MASK (0x7 << NFIFOENTRY_PTYPE_SHIFT)
@@ -1579,19 +2508,23 @@

#define NFIFOENTRY_OC_SHIFT 15
#define NFIFOENTRY_OC_MASK (1 << NFIFOENTRY_OC_SHIFT)
-#define NFIFOENTRY_OC (1 << NFIFOENTRY_OC_SHIFT)
+#define NFIFOENTRY_OC BIT(15)
+
+#define NFIFOENTRY_PR_SHIFT 15
+#define NFIFOENTRY_PR_MASK (1 << NFIFOENTRY_PR_SHIFT)
+#define NFIFOENTRY_PR BIT(15)

#define NFIFOENTRY_AST_SHIFT 14
-#define NFIFOENTRY_AST_MASK (1 << NFIFOENTRY_OC_SHIFT)
-#define NFIFOENTRY_AST (1 << NFIFOENTRY_OC_SHIFT)
+#define NFIFOENTRY_AST_MASK (1 << NFIFOENTRY_AST_SHIFT)
+#define NFIFOENTRY_AST BIT(14)

#define NFIFOENTRY_BM_SHIFT 11
#define NFIFOENTRY_BM_MASK (1 << NFIFOENTRY_BM_SHIFT)
-#define NFIFOENTRY_BM (1 << NFIFOENTRY_BM_SHIFT)
+#define NFIFOENTRY_BM BIT(11)

#define NFIFOENTRY_PS_SHIFT 10
#define NFIFOENTRY_PS_MASK (1 << NFIFOENTRY_PS_SHIFT)
-#define NFIFOENTRY_PS (1 << NFIFOENTRY_PS_SHIFT)
+#define NFIFOENTRY_PS BIT(10)

#define NFIFOENTRY_DLEN_SHIFT 0
#define NFIFOENTRY_DLEN_MASK (0xFFF << NFIFOENTRY_DLEN_SHIFT)
@@ -1600,12 +2533,12 @@
#define NFIFOENTRY_PLEN_MASK (0xFF << NFIFOENTRY_PLEN_SHIFT)

/* Append Load Immediate Command */
-#define FD_CMD_APPEND_LOAD_IMMEDIATE 0x80000000
+#define FD_CMD_APPEND_LOAD_IMMEDIATE BIT(31)

/* Set SEQ LIODN equal to the Non-SEQ LIODN for the job */
-#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN 0x40000000
+#define FD_CMD_SET_SEQ_LIODN_EQUAL_NONSEQ_LIODN BIT(30)

/* Frame Descriptor Command for Replacement Job Descriptor */
-#define FD_CMD_REPLACE_JOB_DESC 0x20000000
+#define FD_CMD_REPLACE_JOB_DESC BIT(29)

-#endif /* DESC_H */
+#endif /* __RTA_DESC_H__ */
diff --git a/drivers/crypto/caam/flib/desc/common.h b/drivers/crypto/caam/flib/desc/common.h
new file mode 100644
index 000000000000..1c69cbfb6173
--- /dev/null
+++ b/drivers/crypto/caam/flib/desc/common.h
@@ -0,0 +1,151 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __DESC_COMMON_H__
+#define __DESC_COMMON_H__
+
+#include "flib/rta.h"
+
+/**
+ * DOC: Shared Descriptor Constructors - shared structures
+ *
+ * Data structures shared between algorithm, protocol implementations.
+ */
+
+/**
+ * enum rta_data_type - Indicates how the data is provided and how to include
+ * it in the descriptor.
+ * @RTA_DATA_PTR: Data is in memory and accessed by reference; data address is a
+ * physical (bus) address.
+ * @RTA_DATA_IMM: Data is inlined in descriptor and accessed as immediate data;
+ * data address is a virtual address.
+ * @RTA_DATA_IMM_DMA: (AIOP only) Data is inlined in descriptor and accessed as
+ * immediate data; data address is a physical (bus) address
+ * in external memory and CDMA is programmed to transfer the
+ * data into descriptor buffer being built in Workspace Area.
+ */
+enum rta_data_type {
+ RTA_DATA_PTR = 1,
+ RTA_DATA_IMM,
+ RTA_DATA_IMM_DMA
+};
+
+/**
+ * struct alginfo - Container for algorithm details
+ * @algtype: algorithm selector; for valid values, see documentation of the
+ * functions where it is used.
+ * @keylen: length of the provided algorithm key, in bytes
+ * @key: address where algorithm key resides; virtual address if key_type is
+ * RTA_DATA_IMM, physical (bus) address if key_type is RTA_DATA_PTR or
+ * RTA_DATA_IMM_DMA.
+ * @key_enc_flags: key encryption flags; see encrypt_flags parameter of KEY
+ * command for valid values.
+ * @key_type: enum rta_data_type
+ */
+struct alginfo {
+ uint32_t algtype;
+ uint32_t keylen;
+ uint64_t key;
+ uint32_t key_enc_flags;
+ enum rta_data_type key_type;
+};
+
+static inline uint32_t inline_flags(enum rta_data_type data_type)
+{
+ switch (data_type) {
+ case RTA_DATA_PTR:
+ return 0;
+ case RTA_DATA_IMM:
+ return IMMED | COPY;
+ case RTA_DATA_IMM_DMA:
+ return IMMED | DCOPY;
+ default:
+ /* warn and default to RTA_DATA_PTR */
+ pr_warn("RTA: defaulting to RTA_DATA_PTR parameter type\n");
+ return 0;
+ }
+}
+
+#define INLINE_KEY(alginfo) inline_flags(alginfo->key_type)
+
+/**
+ * rta_inline_query() - Provide indications on which data items can be inlined
+ * and which shall be referenced in a shared descriptor.
+ * @sd_base_len: Shared descriptor base length - bytes consumed by the commands,
+ * excluding the data items to be inlined (or corresponding
+ * pointer if an item is not inlined). Each cnstr_* function that
+ * generates descriptors should have a define mentioning
+ * the corresponding length.
+ * @jd_len: Maximum length of the job descriptor(s) that will be used
+ * together with the shared descriptor.
+ * @data_len: Array of lengths of the data items trying to be inlined
+ * @inl_mask: 32-bit mask with bit x = 1 if data item x can be inlined, 0
+ * otherwise.
+ * @count: Number of data items (size of @data_len array); must be <= 32
+ *
+ * Return: 0 if data can be inlined / referenced, negative value if not. If 0,
+ * check @inl_mask for details.
+ */
+static inline int rta_inline_query(unsigned sd_base_len, unsigned jd_len,
+ unsigned *data_len, uint32_t *inl_mask,
+ unsigned count)
+{
+ int rem_bytes = (int)(CAAM_DESC_BYTES_MAX - sd_base_len - jd_len);
+ unsigned i;
+
+ *inl_mask = 0;
+ for (i = 0; (i < count) && (rem_bytes > 0); i++) {
+ if (rem_bytes - data_len[i] -
+ (count - i - 1) * CAAM_PTR_SZ >= 0) {
+ rem_bytes -= data_len[i];
+ *inl_mask |= (1 << i);
+ } else {
+ rem_bytes -= CAAM_PTR_SZ;
+ }
+ }
+
+ return (rem_bytes >= 0) ? 0 : -1;
+}
+
+/**
+ * struct protcmd - Container for Protocol Operation Command fields
+ * @optype: command type
+ * @protid: protocol Identifier
+ * @protinfo: protocol Information
+ */
+struct protcmd {
+ uint32_t optype;
+ uint32_t protid;
+ uint16_t protinfo;
+};
+
+/**
+ * split_key_len - Compute MDHA split key length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* - MD5, SHA1,
+ * SHA224, SHA384, SHA512.
+ *
+ * Return: MDHA split key length
+ */
+static inline uint32_t split_key_len(uint32_t hash)
+{
+ /* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
+ static const uint8_t mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
+ uint32_t idx;
+
+ idx = (hash & OP_ALG_ALGSEL_SUBMASK) >> OP_ALG_ALGSEL_SHIFT;
+
+ return (uint32_t)(mdpadlen[idx] * 2);
+}
+
+/**
+ * split_key_pad_len - Compute MDHA split key pad length for a given algorithm
+ * @hash: Hashing algorithm selection, one of OP_ALG_ALGSEL_* - MD5, SHA1,
+ * SHA224, SHA384, SHA512.
+ *
+ * Return: MDHA split key pad length
+ */
+static inline uint32_t split_key_pad_len(uint32_t hash)
+{
+ return ALIGN(split_key_len(hash), 16);
+}
+
+#endif /* __DESC_COMMON_H__ */
diff --git a/drivers/crypto/caam/flib/desc/jobdesc.h b/drivers/crypto/caam/flib/desc/jobdesc.h
new file mode 100644
index 000000000000..27ec739adba7
--- /dev/null
+++ b/drivers/crypto/caam/flib/desc/jobdesc.h
@@ -0,0 +1,57 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __DESC_JOBDESC_H__
+#define __DESC_JOBDESC_H__
+
+#include "flib/rta.h"
+#include "common.h"
+
+/**
+ * DOC: Job Descriptor Constructors
+ *
+ * Job descriptors for certain tasks, like generating MDHA split keys.
+ */
+
+/**
+ * cnstr_jobdesc_mdsplitkey - Generate an MDHA split key
+ * @descbuf: pointer to buffer to hold constructed descriptor
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @alg_key: pointer to HMAC key to generate ipad/opad from
+ * @keylen: HMAC key length
+ * @cipher: HMAC algorithm selection, one of OP_ALG_ALGSEL_*
+ * The algorithm determines key size (bytes):
+ * - OP_ALG_ALGSEL_MD5 - 16
+ * - OP_ALG_ALGSEL_SHA1 - 20
+ * - OP_ALG_ALGSEL_SHA224 - 28
+ * - OP_ALG_ALGSEL_SHA256 - 32
+ * - OP_ALG_ALGSEL_SHA384 - 48
+ * - OP_ALG_ALGSEL_SHA512 - 64
+ * @padbuf: pointer to buffer to store generated ipad/opad
+ *
+ * Split keys are IPAD/OPAD pairs. For details, refer to MDHA Split Keys chapter
+ * in SEC Reference Manual.
+ *
+ * Return: size of descriptor written in words
+ */
+
+static inline int cnstr_jobdesc_mdsplitkey(uint32_t *descbuf, bool ps,
+ uint64_t alg_key, uint8_t keylen,
+ uint32_t cipher, uint64_t padbuf)
+{
+ struct program prg;
+ struct program *p = &prg;
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ JOB_HDR(p, SHR_NEVER, 1, 0, 0);
+ KEY(p, KEY2, 0, alg_key, keylen, 0);
+ ALG_OPERATION(p, cipher, OP_ALG_AAI_HMAC, OP_ALG_AS_INIT,
+ ICV_CHECK_DISABLE, DIR_DEC);
+ FIFOLOAD(p, MSG2, 0, 0, LAST2 | IMMED | COPY);
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CLASS2);
+ FIFOSTORE(p, MDHA_SPLIT_KEY, 0, padbuf, split_key_len(cipher), 0);
+ return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_JOBDESC_H__ */
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index ec3652d62e93..01d434e20ca4 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -11,7 +11,7 @@
#include "compat.h"
#include "regs.h"
#include "jr.h"
-#include "desc.h"
+#include "flib/desc.h"
#include "intern.h"

struct jr_driver_data {
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:26 UTC
The sec4_sg_entry structure is used only by the helper functions in
sg_sw_sec4.h. Since SEC HW S/G entries are meant to be manipulated only
indirectly, via these functions, move sec4_sg_entry to the corresponding
header.

Signed-off-by: Horia Geanta <***@freescale.com>
---
drivers/crypto/caam/desc.h | 10 ----------
drivers/crypto/caam/sg_sw_sec4.h | 10 +++++++++-
2 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/caam/desc.h b/drivers/crypto/caam/desc.h
index f891a67c4786..eb8b870d03a9 100644
--- a/drivers/crypto/caam/desc.h
+++ b/drivers/crypto/caam/desc.h
@@ -8,16 +8,6 @@
#ifndef DESC_H
#define DESC_H

-struct sec4_sg_entry {
- u64 ptr;
-#define SEC4_SG_LEN_FIN 0x40000000
-#define SEC4_SG_LEN_EXT 0x80000000
- u32 len;
- u8 reserved;
- u8 buf_pool_id;
- u16 offset;
-};
-
/* Max size of any CAAM descriptor in 32-bit words, inclusive of header */
#define MAX_CAAM_DESCSIZE 64

diff --git a/drivers/crypto/caam/sg_sw_sec4.h b/drivers/crypto/caam/sg_sw_sec4.h
index a6e5b94756d4..e6fa2c226b8f 100644
--- a/drivers/crypto/caam/sg_sw_sec4.h
+++ b/drivers/crypto/caam/sg_sw_sec4.h
@@ -5,7 +5,15 @@
*
*/

-struct sec4_sg_entry;
+struct sec4_sg_entry {
+ u64 ptr;
+#define SEC4_SG_LEN_FIN 0x40000000
+#define SEC4_SG_LEN_EXT 0x80000000
+ u32 len;
+ u8 reserved;
+ u8 buf_pool_id;
+ u16 offset;
+};

/*
* convert single dma address to h/w link table format
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:31 UTC
desc_constr.h no longer has any users, having been replaced by RTA,
so get rid of it.

pdb.h is removed since its structures are not currently used.
Future protocol descriptors will add them back, when needed,
in the flib/desc/ directory.

Signed-off-by: Horia Geanta <***@freescale.com>
---
drivers/crypto/caam/desc_constr.h | 384 ------------------------------------
drivers/crypto/caam/pdb.h | 402 --------------------------------------
2 files changed, 786 deletions(-)
delete mode 100644 drivers/crypto/caam/desc_constr.h
delete mode 100644 drivers/crypto/caam/pdb.h

diff --git a/drivers/crypto/caam/desc_constr.h b/drivers/crypto/caam/desc_constr.h
deleted file mode 100644
index c344fbce1c67..000000000000
--- a/drivers/crypto/caam/desc_constr.h
+++ /dev/null
@@ -1,384 +0,0 @@
-/*
- * caam descriptor construction helper functions
- *
- * Copyright 2008-2012 Freescale Semiconductor, Inc.
- */
-
-#include "flib/desc.h"
-
-#define IMMEDIATE (1 << 23)
-
-#ifdef DEBUG
-#define PRINT_POS do { printk(KERN_DEBUG "%02d: %s\n", desc_len(desc),\
- &__func__[sizeof("append")]); } while (0)
-#else
-#define PRINT_POS
-#endif
-
-#define SET_OK_NO_PROP_ERRORS (IMMEDIATE | LDST_CLASS_DECO | \
- LDST_SRCDST_WORD_DECOCTRL | \
- (LDOFF_CHG_SHARE_OK_NO_PROP << \
- LDST_OFFSET_SHIFT))
-#define DISABLE_AUTO_INFO_FIFO (IMMEDIATE | LDST_CLASS_DECO | \
- LDST_SRCDST_WORD_DECOCTRL | \
- (LDOFF_DISABLE_AUTO_NFIFO << LDST_OFFSET_SHIFT))
-#define ENABLE_AUTO_INFO_FIFO (IMMEDIATE | LDST_CLASS_DECO | \
- LDST_SRCDST_WORD_DECOCTRL | \
- (LDOFF_ENABLE_AUTO_NFIFO << LDST_OFFSET_SHIFT))
-
-static inline int desc_len(u32 *desc)
-{
- return *desc & HDR_DESCLEN_MASK;
-}
-
-static inline int desc_bytes(void *desc)
-{
- return desc_len(desc) * CAAM_CMD_SZ;
-}
-
-static inline u32 *desc_end(u32 *desc)
-{
- return desc + desc_len(desc);
-}
-
-static inline void *sh_desc_pdb(u32 *desc)
-{
- return desc + 1;
-}
-
-static inline void init_desc(u32 *desc, u32 options)
-{
- *desc = (options | HDR_ONE) + 1;
-}
-
-static inline void init_sh_desc(u32 *desc, u32 options)
-{
- PRINT_POS;
- init_desc(desc, CMD_SHARED_DESC_HDR | options);
-}
-
-static inline void init_sh_desc_pdb(u32 *desc, u32 options, size_t pdb_bytes)
-{
- u32 pdb_len = (pdb_bytes + CAAM_CMD_SZ - 1) / CAAM_CMD_SZ;
-
- init_sh_desc(desc, (((pdb_len + 1) << HDR_START_IDX_SHIFT) + pdb_len) |
- options);
-}
-
-static inline void init_job_desc(u32 *desc, u32 options)
-{
- init_desc(desc, CMD_DESC_HDR | options);
-}
-
-static inline void append_ptr(u32 *desc, dma_addr_t ptr)
-{
- dma_addr_t *offset = (dma_addr_t *)desc_end(desc);
-
- *offset = ptr;
-
- (*desc) += CAAM_PTR_SZ / CAAM_CMD_SZ;
-}
-
-static inline void init_job_desc_shared(u32 *desc, dma_addr_t ptr, int len,
- u32 options)
-{
- PRINT_POS;
- init_job_desc(desc, HDR_SHARED | options |
- (len << HDR_START_IDX_SHIFT));
- append_ptr(desc, ptr);
-}
-
-static inline void append_data(u32 *desc, void *data, int len)
-{
- u32 *offset = desc_end(desc);
-
- if (len) /* avoid sparse warning: memcpy with byte count of 0 */
- memcpy(offset, data, len);
-
- (*desc) += (len + CAAM_CMD_SZ - 1) / CAAM_CMD_SZ;
-}
-
-static inline void append_cmd(u32 *desc, u32 command)
-{
- u32 *cmd = desc_end(desc);
-
- *cmd = command;
-
- (*desc)++;
-}
-
-#define append_u32 append_cmd
-
-static inline void append_u64(u32 *desc, u64 data)
-{
- u32 *offset = desc_end(desc);
-
- *offset = upper_32_bits(data);
- *(++offset) = lower_32_bits(data);
-
- (*desc) += 2;
-}
-
-/* Write command without affecting header, and return pointer to next word */
-static inline u32 *write_cmd(u32 *desc, u32 command)
-{
- *desc = command;
-
- return desc + 1;
-}
-
-static inline void append_cmd_ptr(u32 *desc, dma_addr_t ptr, int len,
- u32 command)
-{
- append_cmd(desc, command | len);
- append_ptr(desc, ptr);
-}
-
-/* Write length after pointer, rather than inside command */
-static inline void append_cmd_ptr_extlen(u32 *desc, dma_addr_t ptr,
- unsigned int len, u32 command)
-{
- append_cmd(desc, command);
- if (!(command & (SQIN_RTO | SQIN_PRE)))
- append_ptr(desc, ptr);
- append_cmd(desc, len);
-}
-
-static inline void append_cmd_data(u32 *desc, void *data, int len,
- u32 command)
-{
- append_cmd(desc, command | IMMEDIATE | len);
- append_data(desc, data, len);
-}
-
-#define APPEND_CMD_RET(cmd, op) \
-static inline u32 *append_##cmd(u32 *desc, u32 options) \
-{ \
- u32 *cmd = desc_end(desc); \
- PRINT_POS; \
- append_cmd(desc, CMD_##op | options); \
- return cmd; \
-}
-APPEND_CMD_RET(jump, JUMP)
-APPEND_CMD_RET(move, MOVE)
-
-static inline void set_jump_tgt_here(u32 *desc, u32 *jump_cmd)
-{
- *jump_cmd = *jump_cmd | (desc_len(desc) - (jump_cmd - desc));
-}
-
-static inline void set_move_tgt_here(u32 *desc, u32 *move_cmd)
-{
- *move_cmd &= ~MOVE_OFFSET_MASK;
- *move_cmd = *move_cmd | ((desc_len(desc) << (MOVE_OFFSET_SHIFT + 2)) &
- MOVE_OFFSET_MASK);
-}
-
-#define APPEND_CMD(cmd, op) \
-static inline void append_##cmd(u32 *desc, u32 options) \
-{ \
- PRINT_POS; \
- append_cmd(desc, CMD_##op | options); \
-}
-APPEND_CMD(operation, OPERATION)
-
-#define APPEND_CMD_LEN(cmd, op) \
-static inline void append_##cmd(u32 *desc, unsigned int len, u32 options) \
-{ \
- PRINT_POS; \
- append_cmd(desc, CMD_##op | len | options); \
-}
-APPEND_CMD_LEN(seq_store, SEQ_STORE)
-APPEND_CMD_LEN(seq_fifo_load, SEQ_FIFO_LOAD)
-APPEND_CMD_LEN(seq_fifo_store, SEQ_FIFO_STORE)
-
-#define APPEND_CMD_PTR(cmd, op) \
-static inline void append_##cmd(u32 *desc, dma_addr_t ptr, unsigned int len, \
- u32 options) \
-{ \
- PRINT_POS; \
- append_cmd_ptr(desc, ptr, len, CMD_##op | options); \
-}
-APPEND_CMD_PTR(key, KEY)
-APPEND_CMD_PTR(load, LOAD)
-APPEND_CMD_PTR(fifo_load, FIFO_LOAD)
-APPEND_CMD_PTR(fifo_store, FIFO_STORE)
-
-static inline void append_store(u32 *desc, dma_addr_t ptr, unsigned int len,
- u32 options)
-{
- u32 cmd_src;
-
- cmd_src = options & LDST_SRCDST_MASK;
-
- append_cmd(desc, CMD_STORE | options | len);
-
- /* The following options do not require pointer */
- if (!(cmd_src == LDST_SRCDST_WORD_DESCBUF_SHARED ||
- cmd_src == LDST_SRCDST_WORD_DESCBUF_JOB ||
- cmd_src == LDST_SRCDST_WORD_DESCBUF_JOB_WE ||
- cmd_src == LDST_SRCDST_WORD_DESCBUF_SHARED_WE))
- append_ptr(desc, ptr);
-}
-
-#define APPEND_SEQ_PTR_INTLEN(cmd, op) \
-static inline void append_seq_##cmd##_ptr_intlen(u32 *desc, dma_addr_t ptr, \
- unsigned int len, \
- u32 options) \
-{ \
- PRINT_POS; \
- if (options & (SQIN_RTO | SQIN_PRE)) \
- append_cmd(desc, CMD_SEQ_##op##_PTR | len | options); \
- else \
- append_cmd_ptr(desc, ptr, len, CMD_SEQ_##op##_PTR | options); \
-}
-APPEND_SEQ_PTR_INTLEN(in, IN)
-APPEND_SEQ_PTR_INTLEN(out, OUT)
-
-#define APPEND_CMD_PTR_TO_IMM(cmd, op) \
-static inline void append_##cmd##_as_imm(u32 *desc, void *data, \
- unsigned int len, u32 options) \
-{ \
- PRINT_POS; \
- append_cmd_data(desc, data, len, CMD_##op | options); \
-}
-APPEND_CMD_PTR_TO_IMM(load, LOAD);
-APPEND_CMD_PTR_TO_IMM(fifo_load, FIFO_LOAD);
-
-#define APPEND_CMD_PTR_EXTLEN(cmd, op) \
-static inline void append_##cmd##_extlen(u32 *desc, dma_addr_t ptr, \
- unsigned int len, u32 options) \
-{ \
- PRINT_POS; \
- append_cmd_ptr_extlen(desc, ptr, len, CMD_##op | SQIN_EXT | options); \
-}
-APPEND_CMD_PTR_EXTLEN(seq_in_ptr, SEQ_IN_PTR)
-APPEND_CMD_PTR_EXTLEN(seq_out_ptr, SEQ_OUT_PTR)
-
-/*
- * Determine whether to store length internally or externally depending on
- * the size of its type
- */
-#define APPEND_CMD_PTR_LEN(cmd, op, type) \
-static inline void append_##cmd(u32 *desc, dma_addr_t ptr, \
- type len, u32 options) \
-{ \
- PRINT_POS; \
- if (sizeof(type) > sizeof(u16)) \
- append_##cmd##_extlen(desc, ptr, len, options); \
- else \
- append_##cmd##_intlen(desc, ptr, len, options); \
-}
-APPEND_CMD_PTR_LEN(seq_in_ptr, SEQ_IN_PTR, u32)
-APPEND_CMD_PTR_LEN(seq_out_ptr, SEQ_OUT_PTR, u32)
-
-/*
- * 2nd variant for commands whose specified immediate length differs
- * from length of immediate data provided, e.g., split keys
- */
-#define APPEND_CMD_PTR_TO_IMM2(cmd, op) \
-static inline void append_##cmd##_as_imm(u32 *desc, void *data, \
- unsigned int data_len, \
- unsigned int len, u32 options) \
-{ \
- PRINT_POS; \
- append_cmd(desc, CMD_##op | IMMEDIATE | len | options); \
- append_data(desc, data, data_len); \
-}
-APPEND_CMD_PTR_TO_IMM2(key, KEY);
-
-#define APPEND_CMD_RAW_IMM(cmd, op, type) \
-static inline void append_##cmd##_imm_##type(u32 *desc, type immediate, \
- u32 options) \
-{ \
- PRINT_POS; \
- append_cmd(desc, CMD_##op | IMMEDIATE | options | sizeof(type)); \
- append_cmd(desc, immediate); \
-}
-APPEND_CMD_RAW_IMM(load, LOAD, u32);
-
-/*
- * Append math command. Only the last part of destination and source need to
- * be specified
- */
-#define APPEND_MATH(op, desc, dest, src_0, src_1, len) \
-append_cmd(desc, CMD_MATH | MATH_FUN_##op | MATH_DEST_##dest | \
- MATH_SRC0_##src_0 | MATH_SRC1_##src_1 | (u32)len);
-
-#define append_math_add(desc, dest, src0, src1, len) \
- APPEND_MATH(ADD, desc, dest, src0, src1, len)
-#define append_math_sub(desc, dest, src0, src1, len) \
- APPEND_MATH(SUB, desc, dest, src0, src1, len)
-#define append_math_add_c(desc, dest, src0, src1, len) \
- APPEND_MATH(ADDC, desc, dest, src0, src1, len)
-#define append_math_sub_b(desc, dest, src0, src1, len) \
- APPEND_MATH(SUBB, desc, dest, src0, src1, len)
-#define append_math_and(desc, dest, src0, src1, len) \
- APPEND_MATH(AND, desc, dest, src0, src1, len)
-#define append_math_or(desc, dest, src0, src1, len) \
- APPEND_MATH(OR, desc, dest, src0, src1, len)
-#define append_math_xor(desc, dest, src0, src1, len) \
- APPEND_MATH(XOR, desc, dest, src0, src1, len)
-#define append_math_lshift(desc, dest, src0, src1, len) \
- APPEND_MATH(LSHIFT, desc, dest, src0, src1, len)
-#define append_math_rshift(desc, dest, src0, src1, len) \
- APPEND_MATH(RSHIFT, desc, dest, src0, src1, len)
-#define append_math_ldshift(desc, dest, src0, src1, len) \
- APPEND_MATH(SHLD, desc, dest, src0, src1, len)
-
-/* Exactly one source is IMM. Data is passed in as u32 value */
-#define APPEND_MATH_IMM_u32(op, desc, dest, src_0, src_1, data) \
-do { \
- APPEND_MATH(op, desc, dest, src_0, src_1, CAAM_CMD_SZ); \
- append_cmd(desc, data); \
-} while (0)
-
-#define append_math_add_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(ADD, desc, dest, src0, src1, data)
-#define append_math_sub_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(SUB, desc, dest, src0, src1, data)
-#define append_math_add_c_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(ADDC, desc, dest, src0, src1, data)
-#define append_math_sub_b_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(SUBB, desc, dest, src0, src1, data)
-#define append_math_and_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(AND, desc, dest, src0, src1, data)
-#define append_math_or_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(OR, desc, dest, src0, src1, data)
-#define append_math_xor_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(XOR, desc, dest, src0, src1, data)
-#define append_math_lshift_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(LSHIFT, desc, dest, src0, src1, data)
-#define append_math_rshift_imm_u32(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u32(RSHIFT, desc, dest, src0, src1, data)
-
-/* Exactly one source is IMM. Data is passed in as u64 value */
-#define APPEND_MATH_IMM_u64(op, desc, dest, src_0, src_1, data) \
-do { \
- u32 upper = (data >> 16) >> 16; \
- APPEND_MATH(op, desc, dest, src_0, src_1, CAAM_CMD_SZ * 2 | \
- (upper ? 0 : MATH_IFB)); \
- if (upper) \
- append_u64(desc, data); \
- else \
- append_u32(desc, data); \
-} while (0)
-
-#define append_math_add_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(ADD, desc, dest, src0, src1, data)
-#define append_math_sub_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(SUB, desc, dest, src0, src1, data)
-#define append_math_add_c_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(ADDC, desc, dest, src0, src1, data)
-#define append_math_sub_b_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(SUBB, desc, dest, src0, src1, data)
-#define append_math_and_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(AND, desc, dest, src0, src1, data)
-#define append_math_or_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(OR, desc, dest, src0, src1, data)
-#define append_math_xor_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(XOR, desc, dest, src0, src1, data)
-#define append_math_lshift_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(LSHIFT, desc, dest, src0, src1, data)
-#define append_math_rshift_imm_u64(desc, dest, src0, src1, data) \
- APPEND_MATH_IMM_u64(RSHIFT, desc, dest, src0, src1, data)
diff --git a/drivers/crypto/caam/pdb.h b/drivers/crypto/caam/pdb.h
deleted file mode 100644
index 3a87c0cf879a..000000000000
--- a/drivers/crypto/caam/pdb.h
+++ /dev/null
@@ -1,402 +0,0 @@
-/*
- * CAAM Protocol Data Block (PDB) definition header file
- *
- * Copyright 2008-2012 Freescale Semiconductor, Inc.
- *
- */
-
-#ifndef CAAM_PDB_H
-#define CAAM_PDB_H
-
-/*
- * PDB- IPSec ESP Header Modification Options
- */
-#define PDBHMO_ESP_DECAP_SHIFT 12
-#define PDBHMO_ESP_ENCAP_SHIFT 4
-/*
- * Encap and Decap - Decrement TTL (Hop Limit) - Based on the value of the
- * Options Byte IP version (IPvsn) field:
- * if IPv4, decrement the inner IP header TTL field (byte 8);
- * if IPv6 decrement the inner IP header Hop Limit field (byte 7).
-*/
-#define PDBHMO_ESP_DECAP_DEC_TTL (0x02 << PDBHMO_ESP_DECAP_SHIFT)
-#define PDBHMO_ESP_ENCAP_DEC_TTL (0x02 << PDBHMO_ESP_ENCAP_SHIFT)
-/*
- * Decap - DiffServ Copy - Copy the IPv4 TOS or IPv6 Traffic Class byte
- * from the outer IP header to the inner IP header.
- */
-#define PDBHMO_ESP_DIFFSERV (0x01 << PDBHMO_ESP_DECAP_SHIFT)
-/*
- * Encap- Copy DF bit -if an IPv4 tunnel mode outer IP header is coming from
- * the PDB, copy the DF bit from the inner IP header to the outer IP header.
- */
-#define PDBHMO_ESP_DFBIT (0x04 << PDBHMO_ESP_ENCAP_SHIFT)
-
-/*
- * PDB - IPSec ESP Encap/Decap Options
- */
-#define PDBOPTS_ESP_ARSNONE 0x00 /* no antireplay window */
-#define PDBOPTS_ESP_ARS32 0x40 /* 32-entry antireplay window */
-#define PDBOPTS_ESP_ARS64 0xc0 /* 64-entry antireplay window */
-#define PDBOPTS_ESP_IVSRC 0x20 /* IV comes from internal random gen */
-#define PDBOPTS_ESP_ESN 0x10 /* extended sequence included */
-#define PDBOPTS_ESP_OUTFMT 0x08 /* output only decapsulation (decap) */
-#define PDBOPTS_ESP_IPHDRSRC 0x08 /* IP header comes from PDB (encap) */
-#define PDBOPTS_ESP_INCIPHDR 0x04 /* Prepend IP header to output frame */
-#define PDBOPTS_ESP_IPVSN 0x02 /* process IPv6 header */
-#define PDBOPTS_ESP_AOFL 0x04 /* adjust out frame len (decap, SEC>=5.3)*/
-#define PDBOPTS_ESP_TUNNEL 0x01 /* tunnel mode next-header byte */
-#define PDBOPTS_ESP_IPV6 0x02 /* ip header version is V6 */
-#define PDBOPTS_ESP_DIFFSERV 0x40 /* copy TOS/TC from inner iphdr */
-#define PDBOPTS_ESP_UPDATE_CSUM 0x80 /* encap-update ip header checksum */
-#define PDBOPTS_ESP_VERIFY_CSUM 0x20 /* decap-validate ip header checksum */
-
-/*
- * General IPSec encap/decap PDB definitions
- */
-struct ipsec_encap_cbc {
- u32 iv[4];
-};
-
-struct ipsec_encap_ctr {
- u32 ctr_nonce;
- u32 ctr_initial;
- u32 iv[2];
-};
-
-struct ipsec_encap_ccm {
- u32 salt; /* lower 24 bits */
- u8 b0_flags;
- u8 ctr_flags;
- u16 ctr_initial;
- u32 iv[2];
-};
-
-struct ipsec_encap_gcm {
- u32 salt; /* lower 24 bits */
- u32 rsvd1;
- u32 iv[2];
-};
-
-struct ipsec_encap_pdb {
- u8 hmo_rsvd;
- u8 ip_nh;
- u8 ip_nh_offset;
- u8 options;
- u32 seq_num_ext_hi;
- u32 seq_num;
- union {
- struct ipsec_encap_cbc cbc;
- struct ipsec_encap_ctr ctr;
- struct ipsec_encap_ccm ccm;
- struct ipsec_encap_gcm gcm;
- };
- u32 spi;
- u16 rsvd1;
- u16 ip_hdr_len;
- u32 ip_hdr[0]; /* optional IP Header content */
-};
-
-struct ipsec_decap_cbc {
- u32 rsvd[2];
-};
-
-struct ipsec_decap_ctr {
- u32 salt;
- u32 ctr_initial;
-};
-
-struct ipsec_decap_ccm {
- u32 salt;
- u8 iv_flags;
- u8 ctr_flags;
- u16 ctr_initial;
-};
-
-struct ipsec_decap_gcm {
- u32 salt;
- u32 resvd;
-};
-
-struct ipsec_decap_pdb {
- u16 hmo_ip_hdr_len;
- u8 ip_nh_offset;
- u8 options;
- union {
- struct ipsec_decap_cbc cbc;
- struct ipsec_decap_ctr ctr;
- struct ipsec_decap_ccm ccm;
- struct ipsec_decap_gcm gcm;
- };
- u32 seq_num_ext_hi;
- u32 seq_num;
- u32 anti_replay[2];
- u32 end_index[0];
-};
-
-/*
- * IPSec ESP Datapath Protocol Override Register (DPOVRD)
- */
-struct ipsec_deco_dpovrd {
-#define IPSEC_ENCAP_DECO_DPOVRD_USE 0x80
- u8 ovrd_ecn;
- u8 ip_hdr_len;
- u8 nh_offset;
- u8 next_header; /* reserved if decap */
-};
-
-/*
- * IEEE 802.11i WiFi Protocol Data Block
- */
-#define WIFI_PDBOPTS_FCS 0x01
-#define WIFI_PDBOPTS_AR 0x40
-
-struct wifi_encap_pdb {
- u16 mac_hdr_len;
- u8 rsvd;
- u8 options;
- u8 iv_flags;
- u8 pri;
- u16 pn1;
- u32 pn2;
- u16 frm_ctrl_mask;
- u16 seq_ctrl_mask;
- u8 rsvd1[2];
- u8 cnst;
- u8 key_id;
- u8 ctr_flags;
- u8 rsvd2;
- u16 ctr_init;
-};
-
-struct wifi_decap_pdb {
- u16 mac_hdr_len;
- u8 rsvd;
- u8 options;
- u8 iv_flags;
- u8 pri;
- u16 pn1;
- u32 pn2;
- u16 frm_ctrl_mask;
- u16 seq_ctrl_mask;
- u8 rsvd1[4];
- u8 ctr_flags;
- u8 rsvd2;
- u16 ctr_init;
-};
-
-/*
- * IEEE 802.16 WiMAX Protocol Data Block
- */
-#define WIMAX_PDBOPTS_FCS 0x01
-#define WIMAX_PDBOPTS_AR 0x40 /* decap only */
-
-struct wimax_encap_pdb {
- u8 rsvd[3];
- u8 options;
- u32 nonce;
- u8 b0_flags;
- u8 ctr_flags;
- u16 ctr_init;
- /* begin DECO writeback region */
- u32 pn;
- /* end DECO writeback region */
-};
-
-struct wimax_decap_pdb {
- u8 rsvd[3];
- u8 options;
- u32 nonce;
- u8 iv_flags;
- u8 ctr_flags;
- u16 ctr_init;
- /* begin DECO writeback region */
- u32 pn;
- u8 rsvd1[2];
- u16 antireplay_len;
- u64 antireplay_scorecard;
- /* end DECO writeback region */
-};
-
-/*
- * IEEE 801.AE MacSEC Protocol Data Block
- */
-#define MACSEC_PDBOPTS_FCS 0x01
-#define MACSEC_PDBOPTS_AR 0x40 /* used in decap only */
-
-struct macsec_encap_pdb {
- u16 aad_len;
- u8 rsvd;
- u8 options;
- u64 sci;
- u16 ethertype;
- u8 tci_an;
- u8 rsvd1;
- /* begin DECO writeback region */
- u32 pn;
- /* end DECO writeback region */
-};
-
-struct macsec_decap_pdb {
- u16 aad_len;
- u8 rsvd;
- u8 options;
- u64 sci;
- u8 rsvd1[3];
- /* begin DECO writeback region */
- u8 antireplay_len;
- u32 pn;
- u64 antireplay_scorecard;
- /* end DECO writeback region */
-};
-
-/*
- * SSL/TLS/DTLS Protocol Data Blocks
- */
-
-#define TLS_PDBOPTS_ARS32 0x40
-#define TLS_PDBOPTS_ARS64 0xc0
-#define TLS_PDBOPTS_OUTFMT 0x08
-#define TLS_PDBOPTS_IV_WRTBK 0x02 /* 1.1/1.2/DTLS only */
-#define TLS_PDBOPTS_EXP_RND_IV 0x01 /* 1.1/1.2/DTLS only */
-
-struct tls_block_encap_pdb {
- u8 type;
- u8 version[2];
- u8 options;
- u64 seq_num;
- u32 iv[4];
-};
-
-struct tls_stream_encap_pdb {
- u8 type;
- u8 version[2];
- u8 options;
- u64 seq_num;
- u8 i;
- u8 j;
- u8 rsvd1[2];
-};
-
-struct dtls_block_encap_pdb {
- u8 type;
- u8 version[2];
- u8 options;
- u16 epoch;
- u16 seq_num[3];
- u32 iv[4];
-};
-
-struct tls_block_decap_pdb {
- u8 rsvd[3];
- u8 options;
- u64 seq_num;
- u32 iv[4];
-};
-
-struct tls_stream_decap_pdb {
- u8 rsvd[3];
- u8 options;
- u64 seq_num;
- u8 i;
- u8 j;
- u8 rsvd1[2];
-};
-
-struct dtls_block_decap_pdb {
- u8 rsvd[3];
- u8 options;
- u16 epoch;
- u16 seq_num[3];
- u32 iv[4];
- u64 antireplay_scorecard;
-};
-
-/*
- * SRTP Protocol Data Blocks
- */
-#define SRTP_PDBOPTS_MKI 0x08
-#define SRTP_PDBOPTS_AR 0x40
-
-struct srtp_encap_pdb {
- u8 x_len;
- u8 mki_len;
- u8 n_tag;
- u8 options;
- u32 cnst0;
- u8 rsvd[2];
- u16 cnst1;
- u16 salt[7];
- u16 cnst2;
- u32 rsvd1;
- u32 roc;
- u32 opt_mki;
-};
-
-struct srtp_decap_pdb {
- u8 x_len;
- u8 mki_len;
- u8 n_tag;
- u8 options;
- u32 cnst0;
- u8 rsvd[2];
- u16 cnst1;
- u16 salt[7];
- u16 cnst2;
- u16 rsvd1;
- u16 seq_num;
- u32 roc;
- u64 antireplay_scorecard;
-};
-
-/*
- * DSA/ECDSA Protocol Data Blocks
- * Two of these exist: DSA-SIGN, and DSA-VERIFY. They are similar
- * except for the treatment of "w" for verify, "s" for sign,
- * and the placement of "a,b".
- */
-#define DSA_PDB_SGF_SHIFT 24
-#define DSA_PDB_SGF_MASK (0xff << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_Q (0x80 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_R (0x40 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_G (0x20 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_W (0x10 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_S (0x10 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_F (0x08 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_C (0x04 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_D (0x02 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_AB_SIGN (0x02 << DSA_PDB_SGF_SHIFT)
-#define DSA_PDB_SGF_AB_VERIFY (0x01 << DSA_PDB_SGF_SHIFT)
-
-#define DSA_PDB_L_SHIFT 7
-#define DSA_PDB_L_MASK (0x3ff << DSA_PDB_L_SHIFT)
-
-#define DSA_PDB_N_MASK 0x7f
-
-struct dsa_sign_pdb {
- u32 sgf_ln; /* Use DSA_PDB_ defintions per above */
- u8 *q;
- u8 *r;
- u8 *g; /* or Gx,y */
- u8 *s;
- u8 *f;
- u8 *c;
- u8 *d;
- u8 *ab; /* ECC only */
- u8 *u;
-};
-
-struct dsa_verify_pdb {
- u32 sgf_ln;
- u8 *q;
- u8 *r;
- u8 *g; /* or Gx,y */
- u8 *w; /* or Wx,y */
- u8 *f;
- u8 *c;
- u8 *d;
- u8 *tmp; /* temporary data block */
- u8 *ab; /* only used if ECC processing */
-};
-
-#endif
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:30 UTC
Descriptors rewritten using RTA were tested to be bit-exact
(i.e. identical hex dumps) with the ones being replaced, with
the following exceptions:
-shared descriptors - the start index is 1 instead of 0; this has
no functional effect
-MDHA split keys differ - the keys are the pre-computed
IPAD | OPAD HMAC keys encrypted with the JDKEK (Job Descriptor
Key-Encryption Key), and the JDKEK changes at every device POR.

Signed-off-by: Horia Geanta <***@freescale.com>
---
drivers/crypto/caam/caamalg.c | 668 +++++++++++++++++++++--------------------
drivers/crypto/caam/caamhash.c | 389 ++++++++++++++----------
drivers/crypto/caam/caamrng.c | 41 ++-
drivers/crypto/caam/ctrl.c | 83 +++--
drivers/crypto/caam/ctrl.h | 2 +-
drivers/crypto/caam/key_gen.c | 35 +--
drivers/crypto/caam/key_gen.h | 5 +-
7 files changed, 680 insertions(+), 543 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index c3a845856cd0..cd1ba573c633 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -48,7 +48,8 @@

#include "regs.h"
#include "intern.h"
-#include "desc_constr.h"
+#include "flib/rta.h"
+#include "flib/desc/common.h"
#include "jr.h"
#include "error.h"
#include "sg_sw_sec4.h"
@@ -91,61 +92,57 @@
#define debug(format, arg...)
#endif
static struct list_head alg_list;
+static const bool ps = (sizeof(dma_addr_t) == sizeof(u64));

/* Set DK bit in class 1 operation if shared */
-static inline void append_dec_op1(u32 *desc, u32 type)
+static inline void append_dec_op1(struct program *p, u32 type)
{
- u32 *jump_cmd, *uncond_jump_cmd;
+ LABEL(jump_cmd);
+ REFERENCE(pjump_cmd);
+ LABEL(uncond_jump_cmd);
+ REFERENCE(puncond_jump_cmd);

/* DK bit is valid only for AES */
if ((type & OP_ALG_ALGSEL_MASK) != OP_ALG_ALGSEL_AES) {
- append_operation(desc, type | OP_ALG_AS_INITFINAL |
- OP_ALG_DECRYPT);
+ ALG_OPERATION(p, type & OP_ALG_ALGSEL_MASK,
+ type & OP_ALG_AAI_MASK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, DIR_DEC);
return;
}

- jump_cmd = append_jump(desc, JUMP_TEST_ALL | JUMP_COND_SHRD);
- append_operation(desc, type | OP_ALG_AS_INITFINAL |
- OP_ALG_DECRYPT);
- uncond_jump_cmd = append_jump(desc, JUMP_TEST_ALL);
- set_jump_tgt_here(desc, jump_cmd);
- append_operation(desc, type | OP_ALG_AS_INITFINAL |
- OP_ALG_DECRYPT | OP_ALG_AAI_DK);
- set_jump_tgt_here(desc, uncond_jump_cmd);
-}
-
-/*
- * For aead functions, read payload and write payload,
- * both of which are specified in req->src and req->dst
- */
-static inline void aead_append_src_dst(u32 *desc, u32 msg_type)
-{
- append_seq_fifo_store(desc, 0, FIFOST_TYPE_MESSAGE_DATA | KEY_VLF);
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_BOTH |
- KEY_VLF | msg_type | FIFOLD_TYPE_LASTBOTH);
+ pjump_cmd = JUMP(p, jump_cmd, LOCAL_JUMP, ALL_TRUE, SHRD);
+ ALG_OPERATION(p, type & OP_ALG_ALGSEL_MASK, type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
+ puncond_jump_cmd = JUMP(p, uncond_jump_cmd, LOCAL_JUMP, ALL_TRUE, 0);
+ SET_LABEL(p, jump_cmd);
+ ALG_OPERATION(p, type & OP_ALG_ALGSEL_MASK,
+ (type & OP_ALG_AAI_MASK) | OP_ALG_AAI_DK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
+ SET_LABEL(p, uncond_jump_cmd);
+
+ PATCH_JUMP(p, pjump_cmd, jump_cmd);
+ PATCH_JUMP(p, puncond_jump_cmd, uncond_jump_cmd);
}

/*
* For aead encrypt and decrypt, read iv for both classes
*/
-static inline void aead_append_ld_iv(u32 *desc, int ivsize)
+static inline void aead_append_ld_iv(struct program *p, u32 ivsize)
{
- append_cmd(desc, CMD_SEQ_LOAD | LDST_SRCDST_BYTE_CONTEXT |
- LDST_CLASS_1_CCB | ivsize);
- append_move(desc, MOVE_SRC_CLASS1CTX | MOVE_DEST_CLASS2INFIFO | ivsize);
+ SEQLOAD(p, CONTEXT1, 0, ivsize, 0);
+ MOVE(p, CONTEXT1, 0, IFIFOAB2, 0, ivsize, IMMED);
}

/*
* For ablkcipher encrypt and decrypt, read from req->src and
* write to req->dst
*/
-static inline void ablkcipher_append_src_dst(u32 *desc)
+static inline void ablkcipher_append_src_dst(struct program *p)
{
- append_math_add(desc, VARSEQOUTLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
- append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS1 |
- KEY_VLF | FIFOLD_TYPE_MSG | FIFOLD_TYPE_LAST1);
- append_seq_fifo_store(desc, 0, FIFOST_TYPE_MESSAGE_DATA | KEY_VLF);
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQOUTSZ, 4, 0);
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, 4, 0);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
}

/*
@@ -168,7 +165,6 @@ struct caam_ctx {
dma_addr_t sh_desc_givenc_dma;
u32 class1_alg_type;
u32 class2_alg_type;
- u32 alg_op;
u8 key[CAAM_MAX_KEY_SIZE];
dma_addr_t key_dma;
unsigned int enckeylen;
@@ -177,38 +173,37 @@ struct caam_ctx {
unsigned int authsize;
};

-static void append_key_aead(u32 *desc, struct caam_ctx *ctx,
+static void append_key_aead(struct program *p, struct caam_ctx *ctx,
int keys_fit_inline)
{
if (keys_fit_inline) {
- append_key_as_imm(desc, ctx->key, ctx->split_key_pad_len,
- ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
- append_key_as_imm(desc, (void *)ctx->key +
- ctx->split_key_pad_len, ctx->enckeylen,
- ctx->enckeylen, CLASS_1 | KEY_DEST_CLASS_REG);
+ KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
+ ctx->split_key_len, IMMED | COPY);
+ KEY(p, KEY1, 0, (uintptr_t)(ctx->key + ctx->split_key_pad_len),
+ ctx->enckeylen, IMMED | COPY);
} else {
- append_key(desc, ctx->key_dma, ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
- append_key(desc, ctx->key_dma + ctx->split_key_pad_len,
- ctx->enckeylen, CLASS_1 | KEY_DEST_CLASS_REG);
+ KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
+ 0);
+ KEY(p, KEY1, 0, ctx->key_dma + ctx->split_key_pad_len,
+ ctx->enckeylen, 0);
}
}

-static void init_sh_desc_key_aead(u32 *desc, struct caam_ctx *ctx,
+static void init_sh_desc_key_aead(struct program *p, struct caam_ctx *ctx,
int keys_fit_inline)
{
- u32 *key_jump_cmd;
+ LABEL(key_jump_cmd);
+ REFERENCE(pkey_jump_cmd);

- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ SHR_HDR(p, SHR_SERIAL, 1, 0);

/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pkey_jump_cmd = JUMP(p, key_jump_cmd, LOCAL_JUMP, ALL_TRUE, SHRD);

- append_key_aead(desc, ctx, keys_fit_inline);
+ append_key_aead(p, ctx, keys_fit_inline);

- set_jump_tgt_here(desc, key_jump_cmd);
+ SET_LABEL(p, key_jump_cmd);
+ PATCH_JUMP(p, pkey_jump_cmd, key_jump_cmd);
}

static int aead_null_set_sh_desc(struct crypto_aead *aead)
@@ -217,8 +212,18 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
struct caam_ctx *ctx = crypto_aead_ctx(aead);
struct device *jrdev = ctx->jrdev;
bool keys_fit_inline = false;
- u32 *key_jump_cmd, *jump_cmd, *read_move_cmd, *write_move_cmd;
u32 *desc;
+ struct program prg;
+ struct program *p = &prg;
+ unsigned desc_bytes;
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);
+ LABEL(nop_cmd);
+ REFERENCE(pnop_cmd);
+ LABEL(read_move_cmd);
+ REFERENCE(pread_move_cmd);
+ LABEL(write_move_cmd);
+ REFERENCE(pwrite_move_cmd);

/*
* Job Descriptor and Shared Descriptors
@@ -230,70 +235,71 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)

/* aead_encrypt shared descriptor */
desc = ctx->sh_desc_enc;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ SHR_HDR(p, SHR_SERIAL, 1, 0);

/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
if (keys_fit_inline)
- append_key_as_imm(desc, ctx->key, ctx->split_key_pad_len,
- ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
+ KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
+ ctx->split_key_len, IMMED | COPY);
else
- append_key(desc, ctx->key_dma, ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
- set_jump_tgt_here(desc, key_jump_cmd);
+ KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
+ 0);
+ SET_LABEL(p, skip_key_load);

/* cryptlen = seqoutlen - authsize */
- append_math_sub_imm_u32(desc, REG3, SEQOUTLEN, IMM, ctx->authsize);
+ MATHB(p, SEQOUTSZ, SUB, ctx->authsize, MATH3, CAAM_CMD_SZ, IMMED2);

/*
* NULL encryption; IV is zero
* assoclen = (assoclen + cryptlen) - cryptlen
*/
- append_math_sub(desc, VARSEQINLEN, SEQINLEN, REG3, CAAM_CMD_SZ);
+ MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);

/* read assoc before reading payload */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG |
- KEY_VLF);
+ SEQFIFOLOAD(p, MSG2, 0, VLF);

/* Prepare to read and write cryptlen bytes */
- append_math_add(desc, VARSEQINLEN, ZERO, REG3, CAAM_CMD_SZ);
- append_math_add(desc, VARSEQOUTLEN, ZERO, REG3, CAAM_CMD_SZ);
+ MATHB(p, ZERO, ADD, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(p, ZERO, ADD, MATH3, VSEQOUTSZ, CAAM_CMD_SZ, 0);

/*
* MOVE_LEN opcode is not available in all SEC HW revisions,
* thus need to do some magic, i.e. self-patch the descriptor
* buffer.
*/
- read_move_cmd = append_move(desc, MOVE_SRC_DESCBUF |
- MOVE_DEST_MATH3 |
- (0x6 << MOVE_LEN_SHIFT));
- write_move_cmd = append_move(desc, MOVE_SRC_MATH3 |
- MOVE_DEST_DESCBUF |
- MOVE_WAITCOMP |
- (0x8 << MOVE_LEN_SHIFT));
+ pread_move_cmd = MOVE(p, DESCBUF, 0, MATH3, 0, 6, IMMED);
+ pwrite_move_cmd = MOVE(p, MATH3, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);

/* Class 2 operation */
- append_operation(desc, ctx->class2_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class2_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);

/* Read and write cryptlen bytes */
- aead_append_src_dst(desc, FIFOLD_TYPE_MSG | FIFOLD_TYPE_FLUSH1);
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);

- set_move_tgt_here(desc, read_move_cmd);
- set_move_tgt_here(desc, write_move_cmd);
- append_cmd(desc, CMD_LOAD | DISABLE_AUTO_INFO_FIFO);
- append_move(desc, MOVE_SRC_INFIFO_CL | MOVE_DEST_OUTFIFO |
- MOVE_AUX_LS);
+ SET_LABEL(p, read_move_cmd);
+ SET_LABEL(p, write_move_cmd);
+ LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
+ MOVE(p, IFIFOAB1, 0, OFIFO, 0, 0, IMMED);

/* Write ICV */
- append_seq_store(desc, ctx->authsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_BYTE_CONTEXT);
+ SEQSTORE(p, CONTEXT2, 0, ctx->authsize, 0);
+
+ PATCH_JUMP(p, pskip_key_load, skip_key_load);
+ PATCH_MOVE(p, pread_move_cmd, read_move_cmd);
+ PATCH_MOVE(p, pwrite_move_cmd, write_move_cmd);

- ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ PROGRAM_FINALIZE(p);
+
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_enc_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -302,8 +308,7 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"aead null enc shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

/*
@@ -315,78 +320,80 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
ctx->split_key_pad_len <= CAAM_DESC_BYTES_MAX)
keys_fit_inline = true;

+ /* aead_decrypt shared descriptor */
desc = ctx->sh_desc_dec;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

- /* aead_decrypt shared descriptor */
- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ SHR_HDR(p, SHR_SERIAL, 1, 0);

/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
if (keys_fit_inline)
- append_key_as_imm(desc, ctx->key, ctx->split_key_pad_len,
- ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
+ KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
+ ctx->split_key_len, IMMED | COPY);
else
- append_key(desc, ctx->key_dma, ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
- set_jump_tgt_here(desc, key_jump_cmd);
+ KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
+ 0);
+ SET_LABEL(p, skip_key_load);

/* Class 2 operation */
- append_operation(desc, ctx->class2_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_DECRYPT | OP_ALG_ICV_ON);
+ ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class2_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);

/* assoclen + cryptlen = seqinlen - ivsize - authsize */
- append_math_sub_imm_u32(desc, REG3, SEQINLEN, IMM,
- ctx->authsize + tfm->ivsize);
+ MATHB(p, SEQINSZ, SUB, ctx->authsize + tfm->ivsize, MATH3, CAAM_CMD_SZ,
+ IMMED2);
/* assoclen = (assoclen + cryptlen) - cryptlen */
- append_math_sub(desc, REG2, SEQOUTLEN, REG0, CAAM_CMD_SZ);
- append_math_sub(desc, VARSEQINLEN, REG3, REG2, CAAM_CMD_SZ);
+ MATHB(p, SEQOUTSZ, SUB, MATH0, MATH2, CAAM_CMD_SZ, 0);
+ MATHB(p, MATH3, SUB, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);

/* read assoc before reading payload */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG |
- KEY_VLF);
+ SEQFIFOLOAD(p, MSG2, 0, VLF);

/* Prepare to read and write cryptlen bytes */
- append_math_add(desc, VARSEQINLEN, ZERO, REG2, CAAM_CMD_SZ);
- append_math_add(desc, VARSEQOUTLEN, ZERO, REG2, CAAM_CMD_SZ);
+ MATHB(p, ZERO, ADD, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(p, ZERO, ADD, MATH2, VSEQOUTSZ, CAAM_CMD_SZ, 0);

/*
* MOVE_LEN opcode is not available in all SEC HW revisions,
* thus need to do some magic, i.e. self-patch the descriptor
* buffer.
*/
- read_move_cmd = append_move(desc, MOVE_SRC_DESCBUF |
- MOVE_DEST_MATH2 |
- (0x6 << MOVE_LEN_SHIFT));
- write_move_cmd = append_move(desc, MOVE_SRC_MATH2 |
- MOVE_DEST_DESCBUF |
- MOVE_WAITCOMP |
- (0x8 << MOVE_LEN_SHIFT));
+ pread_move_cmd = MOVE(p, DESCBUF, 0, MATH2, 0, 6, IMMED);
+ pwrite_move_cmd = MOVE(p, MATH2, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);

/* Read and write cryptlen bytes */
- aead_append_src_dst(desc, FIFOLD_TYPE_MSG | FIFOLD_TYPE_FLUSH1);
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);

/*
* Insert a NOP here, since we need at least 4 instructions between
* code patching the descriptor buffer and the location being patched.
*/
- jump_cmd = append_jump(desc, JUMP_TEST_ALL);
- set_jump_tgt_here(desc, jump_cmd);
+ pnop_cmd = JUMP(p, nop_cmd, LOCAL_JUMP, ALL_TRUE, 0);
+ SET_LABEL(p, nop_cmd);

- set_move_tgt_here(desc, read_move_cmd);
- set_move_tgt_here(desc, write_move_cmd);
- append_cmd(desc, CMD_LOAD | DISABLE_AUTO_INFO_FIFO);
- append_move(desc, MOVE_SRC_INFIFO_CL | MOVE_DEST_OUTFIFO |
- MOVE_AUX_LS);
- append_cmd(desc, CMD_LOAD | ENABLE_AUTO_INFO_FIFO);
+ SET_LABEL(p, read_move_cmd);
+ SET_LABEL(p, write_move_cmd);
+ LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
+ MOVE(p, IFIFOAB1, 0, OFIFO, 0, 0, IMMED);
+ LOAD(p, 0, DCTRL, LDOFF_ENABLE_AUTO_NFIFO, 0, IMMED);

/* Load ICV */
- append_seq_fifo_load(desc, ctx->authsize, FIFOLD_CLASS_CLASS2 |
- FIFOLD_TYPE_LAST2 | FIFOLD_TYPE_ICV);
+ SEQFIFOLOAD(p, ICV2, ctx->authsize, LAST2);
+
+ PATCH_JUMP(p, pskip_key_load, skip_key_load);
+ PATCH_JUMP(p, pnop_cmd, nop_cmd);
+ PATCH_MOVE(p, pread_move_cmd, read_move_cmd);
+ PATCH_MOVE(p, pwrite_move_cmd, write_move_cmd);

- ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ PROGRAM_FINALIZE(p);
+
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_dec_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -395,8 +402,7 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"aead null dec shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

return 0;
@@ -410,6 +416,9 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
bool keys_fit_inline = false;
u32 geniv, moveiv;
u32 *desc;
+ struct program prg;
+ struct program *p = &prg;
+ unsigned desc_bytes;

if (!ctx->authsize)
return 0;
@@ -429,42 +438,50 @@ static int aead_set_sh_desc(struct crypto_aead *aead)

/* aead_encrypt shared descriptor */
desc = ctx->sh_desc_enc;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

- init_sh_desc_key_aead(desc, ctx, keys_fit_inline);
+ init_sh_desc_key_aead(p, ctx, keys_fit_inline);

/* Class 2 operation */
- append_operation(desc, ctx->class2_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class2_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);

/* cryptlen = seqoutlen - authsize */
- append_math_sub_imm_u32(desc, REG3, SEQOUTLEN, IMM, ctx->authsize);
+ MATHB(p, SEQOUTSZ, SUB, ctx->authsize, MATH3, CAAM_CMD_SZ, IMMED2);

/* assoclen + cryptlen = seqinlen - ivsize */
- append_math_sub_imm_u32(desc, REG2, SEQINLEN, IMM, tfm->ivsize);
+ MATHB(p, SEQINSZ, SUB, tfm->ivsize, MATH2, CAAM_CMD_SZ, IMMED2);

/* assoclen = (assoclen + cryptlen) - cryptlen */
- append_math_sub(desc, VARSEQINLEN, REG2, REG3, CAAM_CMD_SZ);
+ MATHB(p, MATH2, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);

/* read assoc before reading payload */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG |
- KEY_VLF);
- aead_append_ld_iv(desc, tfm->ivsize);
+ SEQFIFOLOAD(p, MSG2, 0, VLF);
+ aead_append_ld_iv(p, tfm->ivsize);

/* Class 1 operation */
- append_operation(desc, ctx->class1_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);

/* Read and write cryptlen bytes */
- append_math_add(desc, VARSEQINLEN, ZERO, REG3, CAAM_CMD_SZ);
- append_math_add(desc, VARSEQOUTLEN, ZERO, REG3, CAAM_CMD_SZ);
- aead_append_src_dst(desc, FIFOLD_TYPE_MSG1OUT2);
+ MATHB(p, ZERO, ADD, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(p, ZERO, ADD, MATH3, VSEQOUTSZ, CAAM_CMD_SZ, 0);
+
+ /* Read and write payload */
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2);

/* Write ICV */
- append_seq_store(desc, ctx->authsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_BYTE_CONTEXT);
+ SEQSTORE(p, CONTEXT2, 0, ctx->authsize, 0);

- ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ PROGRAM_FINALIZE(p);
+
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_enc_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -472,8 +489,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead enc shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

/*
@@ -488,39 +504,46 @@ static int aead_set_sh_desc(struct crypto_aead *aead)

/* aead_decrypt shared descriptor */
desc = ctx->sh_desc_dec;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

- init_sh_desc_key_aead(desc, ctx, keys_fit_inline);
+ init_sh_desc_key_aead(p, ctx, keys_fit_inline);

/* Class 2 operation */
- append_operation(desc, ctx->class2_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_DECRYPT | OP_ALG_ICV_ON);
+ ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class2_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);

/* assoclen + cryptlen = seqinlen - ivsize - authsize */
- append_math_sub_imm_u32(desc, REG3, SEQINLEN, IMM,
- ctx->authsize + tfm->ivsize);
+ MATHB(p, SEQINSZ, SUB, ctx->authsize + tfm->ivsize, MATH3, CAAM_CMD_SZ,
+ IMMED2);
/* assoclen = (assoclen + cryptlen) - cryptlen */
- append_math_sub(desc, REG2, SEQOUTLEN, REG0, CAAM_CMD_SZ);
- append_math_sub(desc, VARSEQINLEN, REG3, REG2, CAAM_CMD_SZ);
+ MATHB(p, SEQOUTSZ, SUB, MATH0, MATH2, CAAM_CMD_SZ, 0);
+ MATHB(p, MATH3, SUB, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);

/* read assoc before reading payload */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG |
- KEY_VLF);
+ SEQFIFOLOAD(p, MSG2, 0, VLF);

- aead_append_ld_iv(desc, tfm->ivsize);
+ aead_append_ld_iv(p, tfm->ivsize);

- append_dec_op1(desc, ctx->class1_alg_type);
+ append_dec_op1(p, ctx->class1_alg_type);

/* Read and write cryptlen bytes */
- append_math_add(desc, VARSEQINLEN, ZERO, REG2, CAAM_CMD_SZ);
- append_math_add(desc, VARSEQOUTLEN, ZERO, REG2, CAAM_CMD_SZ);
- aead_append_src_dst(desc, FIFOLD_TYPE_MSG);
+ MATHB(p, ZERO, ADD, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(p, ZERO, ADD, MATH2, VSEQOUTSZ, CAAM_CMD_SZ, 0);
+
+ /* Read and write payload */
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2);

/* Load ICV */
- append_seq_fifo_load(desc, ctx->authsize, FIFOLD_CLASS_CLASS2 |
- FIFOLD_TYPE_LAST2 | FIFOLD_TYPE_ICV);
+ SEQFIFOLOAD(p, ICV2, ctx->authsize, LAST2);

- ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ PROGRAM_FINALIZE(p);
+
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_dec_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -528,8 +551,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead dec shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

/*
@@ -544,67 +566,69 @@ static int aead_set_sh_desc(struct crypto_aead *aead)

/* aead_givencrypt shared descriptor */
desc = ctx->sh_desc_givenc;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

- init_sh_desc_key_aead(desc, ctx, keys_fit_inline);
+ init_sh_desc_key_aead(p, ctx, keys_fit_inline);

/* Generate IV */
geniv = NFIFOENTRY_STYPE_PAD | NFIFOENTRY_DEST_DECO |
NFIFOENTRY_DTYPE_MSG | NFIFOENTRY_LC1 |
NFIFOENTRY_PTYPE_RND | (tfm->ivsize << NFIFOENTRY_DLEN_SHIFT);
- append_load_imm_u32(desc, geniv, LDST_CLASS_IND_CCB |
- LDST_SRCDST_WORD_INFO_FIFO | LDST_IMM);
- append_cmd(desc, CMD_LOAD | DISABLE_AUTO_INFO_FIFO);
- append_move(desc, MOVE_SRC_INFIFO |
- MOVE_DEST_CLASS1CTX | (tfm->ivsize << MOVE_LEN_SHIFT));
- append_cmd(desc, CMD_LOAD | ENABLE_AUTO_INFO_FIFO);
+ LOAD(p, geniv, NFIFO, 0, CAAM_CMD_SZ, IMMED);
+ LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
+ MOVE(p, IFIFOABD, 0, CONTEXT1, 0, tfm->ivsize, IMMED);
+ LOAD(p, 0, DCTRL, LDOFF_ENABLE_AUTO_NFIFO, 0, IMMED);

/* Copy IV to class 1 context */
- append_move(desc, MOVE_SRC_CLASS1CTX |
- MOVE_DEST_OUTFIFO | (tfm->ivsize << MOVE_LEN_SHIFT));
+ MOVE(p, CONTEXT1, 0, OFIFO, 0, tfm->ivsize, IMMED);

/* Return to encryption */
- append_operation(desc, ctx->class2_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class2_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);

/* ivsize + cryptlen = seqoutlen - authsize */
- append_math_sub_imm_u32(desc, REG3, SEQOUTLEN, IMM, ctx->authsize);
+ MATHB(p, SEQOUTSZ, SUB, ctx->authsize, MATH3, CAAM_CMD_SZ, IMMED2);

/* assoclen = seqinlen - (ivsize + cryptlen) */
- append_math_sub(desc, VARSEQINLEN, SEQINLEN, REG3, CAAM_CMD_SZ);
+ MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);

/* read assoc before reading payload */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG |
- KEY_VLF);
+ SEQFIFOLOAD(p, MSG2, 0, VLF);

/* Copy iv from class 1 ctx to class 2 fifo*/
moveiv = NFIFOENTRY_STYPE_OFIFO | NFIFOENTRY_DEST_CLASS2 |
NFIFOENTRY_DTYPE_MSG | (tfm->ivsize << NFIFOENTRY_DLEN_SHIFT);
- append_load_imm_u32(desc, moveiv, LDST_CLASS_IND_CCB |
- LDST_SRCDST_WORD_INFO_FIFO | LDST_IMM);
- append_load_imm_u32(desc, tfm->ivsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_WORD_DATASZ_REG | LDST_IMM);
+ LOAD(p, moveiv, NFIFO, 0, CAAM_CMD_SZ, IMMED);
+ LOAD(p, tfm->ivsize, DATA2SZ, 0, CAAM_CMD_SZ, IMMED);

/* Class 1 operation */
- append_operation(desc, ctx->class1_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);

/* Will write ivsize + cryptlen */
- append_math_add(desc, VARSEQOUTLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQOUTSZ, CAAM_CMD_SZ, 0);

/* Not need to reload iv */
- append_seq_fifo_load(desc, tfm->ivsize,
- FIFOLD_CLASS_SKIP);
+ SEQFIFOLOAD(p, SKIP, tfm->ivsize, 0);

/* Will read cryptlen */
- append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
- aead_append_src_dst(desc, FIFOLD_TYPE_MSG1OUT2);
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, CAAM_CMD_SZ, 0);
+
+ /* Read and write payload */
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2);

/* Write ICV */
- append_seq_store(desc, ctx->authsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_BYTE_CONTEXT);
+ SEQSTORE(p, CONTEXT2, 0, ctx->authsize, 0);

- ctx->sh_desc_givenc_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ PROGRAM_FINALIZE(p);
+
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_givenc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_givenc_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -612,8 +636,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead givenc shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

return 0;
@@ -633,16 +656,13 @@ static int aead_setauthsize(struct crypto_aead *authenc,
static u32 gen_split_aead_key(struct caam_ctx *ctx, const u8 *key_in,
u32 authkeylen)
{
- return gen_split_key(ctx->jrdev, ctx->key, ctx->split_key_len,
- ctx->split_key_pad_len, key_in, authkeylen,
- ctx->alg_op);
+ return gen_split_key(ctx->jrdev, ctx->key, ctx->split_key_pad_len,
+ key_in, authkeylen, ctx->class2_alg_type);
}

static int aead_setkey(struct crypto_aead *aead,
const u8 *key, unsigned int keylen)
{
- /* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
- static const u8 mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
struct caam_ctx *ctx = crypto_aead_ctx(aead);
struct device *jrdev = ctx->jrdev;
struct crypto_authenc_keys keys;
@@ -651,10 +671,11 @@ static int aead_setkey(struct crypto_aead *aead,
if (crypto_authenc_extractkeys(&keys, key, keylen) != 0)
goto badkey;

- /* Pick class 2 key length from algorithm submask */
- ctx->split_key_len = mdpadlen[(ctx->alg_op & OP_ALG_ALGSEL_SUBMASK) >>
- OP_ALG_ALGSEL_SHIFT] * 2;
- ctx->split_key_pad_len = ALIGN(ctx->split_key_len, 16);
+ /* Compute class 2 key length */
+ ctx->split_key_len = split_key_len(ctx->class2_alg_type &
+ OP_ALG_ALGSEL_MASK);
+ ctx->split_key_pad_len = split_key_pad_len(ctx->class2_alg_type &
+ OP_ALG_ALGSEL_MASK);

if (ctx->split_key_pad_len + keys.enckeylen > CAAM_MAX_KEY_SIZE)
goto badkey;
@@ -710,8 +731,12 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
struct ablkcipher_tfm *tfm = &ablkcipher->base.crt_ablkcipher;
struct device *jrdev = ctx->jrdev;
int ret = 0;
- u32 *key_jump_cmd;
u32 *desc;
+ struct program prg;
+ struct program *p = &prg;
+ unsigned desc_bytes;
+ LABEL(key_jump_cmd);
+ REFERENCE(pkey_jump_cmd);

#ifdef DEBUG
print_hex_dump(KERN_ERR, "key in @"__stringify(__LINE__)": ",
@@ -729,31 +754,36 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,

/* ablkcipher_encrypt shared descriptor */
desc = ctx->sh_desc_enc;
- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ SHR_HDR(p, SHR_SERIAL, 1, 0);
/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pkey_jump_cmd = JUMP(p, key_jump_cmd, LOCAL_JUMP, ALL_TRUE, SHRD);

/* Load class1 key only */
- append_key_as_imm(desc, (void *)ctx->key, ctx->enckeylen,
- ctx->enckeylen, CLASS_1 |
- KEY_DEST_CLASS_REG);
+ KEY(p, KEY1, 0, (uintptr_t)ctx->key, ctx->enckeylen, IMMED | COPY);

- set_jump_tgt_here(desc, key_jump_cmd);
+ SET_LABEL(p, key_jump_cmd);

- /* Load iv */
- append_cmd(desc, CMD_SEQ_LOAD | LDST_SRCDST_BYTE_CONTEXT |
- LDST_CLASS_1_CCB | tfm->ivsize);
+ /* Load IV */
+ SEQLOAD(p, CONTEXT1, 0, tfm->ivsize, 0);

/* Load operation */
- append_operation(desc, ctx->class1_alg_type |
- OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
+ ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);

/* Perform operation */
- ablkcipher_append_src_dst(desc);
+ ablkcipher_append_src_dst(p);
+
+ PATCH_JUMP(p, pkey_jump_cmd, key_jump_cmd);
+
+ PROGRAM_FINALIZE(p);

- ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_enc_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -762,36 +792,40 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"ablkcipher enc shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif
+
/* ablkcipher_decrypt shared descriptor */
desc = ctx->sh_desc_dec;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ SHR_HDR(p, SHR_SERIAL, 1, 0);

- init_sh_desc(desc, HDR_SHARE_SERIAL);
/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pkey_jump_cmd = JUMP(p, key_jump_cmd, LOCAL_JUMP, ALL_TRUE, SHRD);

/* Load class1 key only */
- append_key_as_imm(desc, (void *)ctx->key, ctx->enckeylen,
- ctx->enckeylen, CLASS_1 |
- KEY_DEST_CLASS_REG);
+ KEY(p, KEY1, 0, (uintptr_t)ctx->key, ctx->enckeylen, IMMED | COPY);

- set_jump_tgt_here(desc, key_jump_cmd);
+ SET_LABEL(p, key_jump_cmd);

/* load IV */
- append_cmd(desc, CMD_SEQ_LOAD | LDST_SRCDST_BYTE_CONTEXT |
- LDST_CLASS_1_CCB | tfm->ivsize);
+ SEQLOAD(p, CONTEXT1, 0, tfm->ivsize, 0);

/* Choose operation */
- append_dec_op1(desc, ctx->class1_alg_type);
+ append_dec_op1(p, ctx->class1_alg_type);

/* Perform operation */
- ablkcipher_append_src_dst(desc);
+ ablkcipher_append_src_dst(p);
+
+ PATCH_JUMP(p, pkey_jump_cmd, key_jump_cmd);
+
+ PROGRAM_FINALIZE(p);

- ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ desc_bytes = DESC_BYTES(desc);
+ ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_dec_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -801,8 +835,7 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"ablkcipher dec shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes, 1);
#endif

return ret;
@@ -1081,9 +1114,11 @@ static void init_aead_job(u32 *sh_desc, dma_addr_t ptr,
int ivsize = crypto_aead_ivsize(aead);
int authsize = ctx->authsize;
u32 *desc = edesc->hw_desc;
- u32 out_options = 0, in_options;
+ u32 out_options = EXT, in_options = EXT;
dma_addr_t dst_dma, src_dma;
- int len, sec4_sg_index = 0;
+ unsigned len, sec4_sg_index = 0;
+ struct program prg;
+ struct program *p = &prg;

#ifdef DEBUG
debug("assoclen %d cryptlen %d authsize %d\n",
@@ -1098,25 +1133,28 @@ static void init_aead_job(u32 *sh_desc, dma_addr_t ptr,
DUMP_PREFIX_ADDRESS, 16, 4, sg_virt(req->src),
edesc->src_nents ? 100 : req->cryptlen, 1);
print_hex_dump(KERN_ERR, "shrdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, sh_desc,
- desc_bytes(sh_desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, sh_desc, DESC_BYTES(sh_desc),
+ 1);
#endif

- len = desc_len(sh_desc);
- init_job_desc_shared(desc, ptr, len, HDR_SHARE_DEFER | HDR_REVERSE);
+ len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(p, desc, len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_DEFER, len, ptr, REO | SHR);

if (all_contig) {
src_dma = sg_dma_address(req->assoc);
- in_options = 0;
} else {
src_dma = edesc->sec4_sg_dma;
sec4_sg_index += (edesc->assoc_nents ? : 1) + 1 +
(edesc->src_nents ? : 1);
- in_options = LDST_SGF;
+ in_options |= SGF;
}

- append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize + req->cryptlen,
- in_options);
+ SEQINPTR(p, src_dma, req->assoclen + ivsize + req->cryptlen,
+ in_options);

if (likely(req->src == req->dst)) {
if (all_contig) {
@@ -1124,7 +1162,7 @@ static void init_aead_job(u32 *sh_desc, dma_addr_t ptr,
} else {
dst_dma = src_dma + sizeof(struct sec4_sg_entry) *
((edesc->assoc_nents ? : 1) + 1);
- out_options = LDST_SGF;
+ out_options |= SGF;
}
} else {
if (!edesc->dst_nents) {
@@ -1133,15 +1171,15 @@ static void init_aead_job(u32 *sh_desc, dma_addr_t ptr,
dst_dma = edesc->sec4_sg_dma +
sec4_sg_index *
sizeof(struct sec4_sg_entry);
- out_options = LDST_SGF;
+ out_options |= SGF;
}
}
if (encrypt)
- append_seq_out_ptr(desc, dst_dma, req->cryptlen + authsize,
- out_options);
+ SEQOUTPTR(p, dst_dma, req->cryptlen + authsize, out_options);
else
- append_seq_out_ptr(desc, dst_dma, req->cryptlen - authsize,
- out_options);
+ SEQOUTPTR(p, dst_dma, req->cryptlen - authsize, out_options);
+
+ PROGRAM_FINALIZE(p);
}

/*
@@ -1157,9 +1195,11 @@ static void init_aead_giv_job(u32 *sh_desc, dma_addr_t ptr,
int ivsize = crypto_aead_ivsize(aead);
int authsize = ctx->authsize;
u32 *desc = edesc->hw_desc;
- u32 out_options = 0, in_options;
+ u32 out_options = EXT, in_options = EXT;
dma_addr_t dst_dma, src_dma;
- int len, sec4_sg_index = 0;
+ unsigned len, sec4_sg_index = 0;
+ struct program prg;
+ struct program *p = &prg;

#ifdef DEBUG
debug("assoclen %d cryptlen %d authsize %d\n",
@@ -1173,23 +1213,26 @@ static void init_aead_giv_job(u32 *sh_desc, dma_addr_t ptr,
DUMP_PREFIX_ADDRESS, 16, 4, sg_virt(req->src),
edesc->src_nents > 1 ? 100 : req->cryptlen, 1);
print_hex_dump(KERN_ERR, "shrdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, sh_desc,
- desc_bytes(sh_desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, sh_desc, DESC_BYTES(sh_desc),
+ 1);
#endif

- len = desc_len(sh_desc);
- init_job_desc_shared(desc, ptr, len, HDR_SHARE_DEFER | HDR_REVERSE);
+ len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(p, desc, len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_DEFER, len, ptr, REO | SHR);

if (contig & GIV_SRC_CONTIG) {
src_dma = sg_dma_address(req->assoc);
- in_options = 0;
} else {
src_dma = edesc->sec4_sg_dma;
sec4_sg_index += edesc->assoc_nents + 1 + edesc->src_nents;
- in_options = LDST_SGF;
+ in_options |= SGF;
}
- append_seq_in_ptr(desc, src_dma, req->assoclen + ivsize + req->cryptlen,
- in_options);
+ SEQINPTR(p, src_dma, req->assoclen + ivsize + req->cryptlen,
+ in_options);

if (contig & GIV_DST_CONTIG) {
dst_dma = edesc->iv_dma;
@@ -1197,17 +1240,18 @@ static void init_aead_giv_job(u32 *sh_desc, dma_addr_t ptr,
if (likely(req->src == req->dst)) {
dst_dma = src_dma + sizeof(struct sec4_sg_entry) *
edesc->assoc_nents;
- out_options = LDST_SGF;
+ out_options |= SGF;
} else {
dst_dma = edesc->sec4_sg_dma +
sec4_sg_index *
sizeof(struct sec4_sg_entry);
- out_options = LDST_SGF;
+ out_options |= SGF;
}
}

- append_seq_out_ptr(desc, dst_dma, ivsize + req->cryptlen + authsize,
- out_options);
+ SEQOUTPTR(p, dst_dma, ivsize + req->cryptlen + authsize, out_options);
+
+ PROGRAM_FINALIZE(p);
}

/*
@@ -1221,9 +1265,11 @@ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr,
struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
u32 *desc = edesc->hw_desc;
- u32 out_options = 0, in_options;
+ u32 out_options = EXT, in_options = EXT;
dma_addr_t dst_dma, src_dma;
- int len, sec4_sg_index = 0;
+ unsigned len, sec4_sg_index = 0;
+ struct program prg;
+ struct program *p = &prg;

#ifdef DEBUG
print_hex_dump(KERN_ERR, "presciv@"__stringify(__LINE__)": ",
@@ -1234,18 +1280,21 @@ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr,
edesc->src_nents ? 100 : req->nbytes, 1);
#endif

- len = desc_len(sh_desc);
- init_job_desc_shared(desc, ptr, len, HDR_SHARE_DEFER | HDR_REVERSE);
+ len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(p, desc, len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_DEFER, len, ptr, REO | SHR);

if (iv_contig) {
src_dma = edesc->iv_dma;
- in_options = 0;
} else {
src_dma = edesc->sec4_sg_dma;
sec4_sg_index += (iv_contig ? 0 : 1) + edesc->src_nents;
- in_options = LDST_SGF;
+ in_options |= SGF;
}
- append_seq_in_ptr(desc, src_dma, req->nbytes + ivsize, in_options);
+ SEQINPTR(p, src_dma, req->nbytes + ivsize, in_options);

if (likely(req->src == req->dst)) {
if (!edesc->src_nents && iv_contig) {
@@ -1253,7 +1302,7 @@ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr,
} else {
dst_dma = edesc->sec4_sg_dma +
sizeof(struct sec4_sg_entry);
- out_options = LDST_SGF;
+ out_options |= SGF;
}
} else {
if (!edesc->dst_nents) {
@@ -1261,10 +1310,13 @@ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr,
} else {
dst_dma = edesc->sec4_sg_dma +
sec4_sg_index * sizeof(struct sec4_sg_entry);
- out_options = LDST_SGF;
+ out_options |= SGF;
}
}
- append_seq_out_ptr(desc, dst_dma, req->nbytes, out_options);
+
+ SEQOUTPTR(p, dst_dma, req->nbytes, out_options);
+
+ PROGRAM_FINALIZE(p);
}

/*
@@ -1406,7 +1458,7 @@ static int aead_encrypt(struct aead_request *req)
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
+ DESC_BYTES(edesc->hw_desc), 1);
#endif

desc = edesc->hw_desc;
@@ -1449,7 +1501,7 @@ static int aead_decrypt(struct aead_request *req)
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
+ DESC_BYTES(edesc->hw_desc), 1);
#endif

desc = edesc->hw_desc;
@@ -1612,7 +1664,7 @@ static int aead_givencrypt(struct aead_givcrypt_request *areq)
#ifdef DEBUG
print_hex_dump(KERN_ERR, "aead jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
+ DESC_BYTES(edesc->hw_desc), 1);
#endif

desc = edesc->hw_desc;
@@ -1755,7 +1807,7 @@ static int ablkcipher_encrypt(struct ablkcipher_request *req)
#ifdef DEBUG
print_hex_dump(KERN_ERR, "ablkcipher jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
+ DESC_BYTES(edesc->hw_desc), 1);
#endif
desc = edesc->hw_desc;
ret = caam_jr_enqueue(jrdev, desc, ablkcipher_encrypt_done, req);
@@ -1793,7 +1845,7 @@ static int ablkcipher_decrypt(struct ablkcipher_request *req)
#ifdef DEBUG
print_hex_dump(KERN_ERR, "ablkcipher jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
- desc_bytes(edesc->hw_desc), 1);
+ DESC_BYTES(edesc->hw_desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ablkcipher_decrypt_done, req);
@@ -1824,7 +1876,6 @@ struct caam_alg_template {
} template_u;
u32 class1_alg_type;
u32 class2_alg_type;
- u32 alg_op;
};

static struct caam_alg_template driver_algs[] = {
@@ -1846,7 +1897,6 @@ static struct caam_alg_template driver_algs[] = {
},
.class1_alg_type = 0,
.class2_alg_type = OP_ALG_ALGSEL_MD5 | OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_MD5 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha1),ecb(cipher_null))",
@@ -1865,7 +1915,6 @@ static struct caam_alg_template driver_algs[] = {
},
.class1_alg_type = 0,
.class2_alg_type = OP_ALG_ALGSEL_SHA1 | OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA1 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha224),ecb(cipher_null))",
@@ -1885,7 +1934,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = 0,
.class2_alg_type = OP_ALG_ALGSEL_SHA224 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA224 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha256),ecb(cipher_null))",
@@ -1905,7 +1953,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = 0,
.class2_alg_type = OP_ALG_ALGSEL_SHA256 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA256 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha384),ecb(cipher_null))",
@@ -1925,7 +1972,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = 0,
.class2_alg_type = OP_ALG_ALGSEL_SHA384 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA384 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha512),ecb(cipher_null))",
@@ -1945,7 +1991,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = 0,
.class2_alg_type = OP_ALG_ALGSEL_SHA512 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA512 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(md5),cbc(aes))",
@@ -1964,7 +2009,6 @@ static struct caam_alg_template driver_algs[] = {
},
.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_MD5 | OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_MD5 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha1),cbc(aes))",
@@ -1983,7 +2027,6 @@ static struct caam_alg_template driver_algs[] = {
},
.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA1 | OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA1 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha224),cbc(aes))",
@@ -2003,7 +2046,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA224 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA224 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha256),cbc(aes))",
@@ -2023,7 +2065,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA256 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA256 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha384),cbc(aes))",
@@ -2043,7 +2084,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA384 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA384 | OP_ALG_AAI_HMAC,
},

{
@@ -2064,7 +2104,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA512 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA512 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(md5),cbc(des3_ede))",
@@ -2083,7 +2122,6 @@ static struct caam_alg_template driver_algs[] = {
},
.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_MD5 | OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_MD5 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha1),cbc(des3_ede))",
@@ -2102,7 +2140,6 @@ static struct caam_alg_template driver_algs[] = {
},
.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA1 | OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA1 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha224),cbc(des3_ede))",
@@ -2122,7 +2159,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA224 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA224 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha256),cbc(des3_ede))",
@@ -2142,7 +2178,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA256 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA256 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha384),cbc(des3_ede))",
@@ -2162,7 +2197,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA384 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA384 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha512),cbc(des3_ede))",
@@ -2182,7 +2216,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA512 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA512 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(md5),cbc(des))",
@@ -2201,7 +2234,6 @@ static struct caam_alg_template driver_algs[] = {
},
.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_MD5 | OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_MD5 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha1),cbc(des))",
@@ -2220,7 +2252,6 @@ static struct caam_alg_template driver_algs[] = {
},
.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA1 | OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA1 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha224),cbc(des))",
@@ -2240,7 +2271,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA224 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA224 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha256),cbc(des))",
@@ -2260,7 +2290,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA256 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA256 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha384),cbc(des))",
@@ -2280,7 +2309,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA384 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA384 | OP_ALG_AAI_HMAC,
},
{
.name = "authenc(hmac(sha512),cbc(des))",
@@ -2300,7 +2328,6 @@ static struct caam_alg_template driver_algs[] = {
.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
.class2_alg_type = OP_ALG_ALGSEL_SHA512 |
OP_ALG_AAI_HMAC_PRECOMP,
- .alg_op = OP_ALG_ALGSEL_SHA512 | OP_ALG_AAI_HMAC,
},
/* ablkcipher descriptor */
{
@@ -2357,7 +2384,6 @@ struct caam_crypto_alg {
struct list_head entry;
int class1_alg_type;
int class2_alg_type;
- int alg_op;
struct crypto_alg crypto_alg;
};

@@ -2377,7 +2403,6 @@ static int caam_cra_init(struct crypto_tfm *tfm)
/* copy descriptor header template value */
ctx->class1_alg_type = OP_TYPE_CLASS1_ALG | caam_alg->class1_alg_type;
ctx->class2_alg_type = OP_TYPE_CLASS2_ALG | caam_alg->class2_alg_type;
- ctx->alg_op = OP_TYPE_CLASS2_ALG | caam_alg->alg_op;

return 0;
}
@@ -2389,15 +2414,15 @@ static void caam_cra_exit(struct crypto_tfm *tfm)
if (ctx->sh_desc_enc_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_enc_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_enc_dma,
- desc_bytes(ctx->sh_desc_enc), DMA_TO_DEVICE);
+ DESC_BYTES(ctx->sh_desc_enc), DMA_TO_DEVICE);
if (ctx->sh_desc_dec_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_dec_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_dec_dma,
- desc_bytes(ctx->sh_desc_dec), DMA_TO_DEVICE);
+ DESC_BYTES(ctx->sh_desc_dec), DMA_TO_DEVICE);
if (ctx->sh_desc_givenc_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_givenc_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_givenc_dma,
- desc_bytes(ctx->sh_desc_givenc),
+ DESC_BYTES(ctx->sh_desc_givenc),
DMA_TO_DEVICE);
if (ctx->key_dma &&
!dma_mapping_error(ctx->jrdev, ctx->key_dma))
@@ -2462,7 +2487,6 @@ static struct caam_crypto_alg *caam_alg_alloc(struct caam_alg_template

t_alg->class1_alg_type = template->class1_alg_type;
t_alg->class2_alg_type = template->class2_alg_type;
- t_alg->alg_op = template->alg_op;

return t_alg;
}
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 386efb9e192c..529e3ca92406 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -57,7 +57,8 @@

#include "regs.h"
#include "intern.h"
-#include "desc_constr.h"
+#include "flib/rta.h"
+#include "flib/desc/common.h"
#include "jr.h"
#include "error.h"
#include "sg_sw_sec4.h"
@@ -96,6 +97,7 @@


static struct list_head hash_list;
+static const bool ps = (sizeof(dma_addr_t) == sizeof(u64));

/* ahash per-session context */
struct caam_hash_ctx {
@@ -111,7 +113,6 @@ struct caam_hash_ctx {
dma_addr_t sh_desc_digest_dma;
dma_addr_t sh_desc_finup_dma;
u32 alg_type;
- u32 alg_op;
u8 key[CAAM_MAX_HASH_KEY_SIZE];
dma_addr_t key_dma;
int ctx_len;
@@ -137,7 +138,7 @@ struct caam_hash_state {
/* Common job descriptor seq in/out ptr routines */

/* Map state->caam_ctx, and append seq_out_ptr command that points to it */
-static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
+static inline int map_seq_out_ptr_ctx(struct program *p, struct device *jrdev,
struct caam_hash_state *state,
int ctx_len)
{
@@ -148,19 +149,20 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
return -ENOMEM;
}

- append_seq_out_ptr(desc, state->ctx_dma, ctx_len, 0);
+ SEQOUTPTR(p, state->ctx_dma, ctx_len, EXT);

return 0;
}

/* Map req->result, and append seq_out_ptr command that points to it */
-static inline dma_addr_t map_seq_out_ptr_result(u32 *desc, struct device *jrdev,
+static inline dma_addr_t map_seq_out_ptr_result(struct program *p,
+ struct device *jrdev,
u8 *result, int digestsize)
{
dma_addr_t dst_dma;

dst_dma = dma_map_single(jrdev, result, digestsize, DMA_FROM_DEVICE);
- append_seq_out_ptr(desc, dst_dma, digestsize, 0);
+ SEQOUTPTR(p, dst_dma, digestsize, EXT);

return dst_dma;
}
@@ -224,28 +226,32 @@ static inline int ctx_map_to_sec4_sg(u32 *desc, struct device *jrdev,
}

/* Common shared descriptor commands */
-static inline void append_key_ahash(u32 *desc, struct caam_hash_ctx *ctx)
+static inline void append_key_ahash(struct program *p,
+ struct caam_hash_ctx *ctx)
{
- append_key_as_imm(desc, ctx->key, ctx->split_key_pad_len,
- ctx->split_key_len, CLASS_2 |
- KEY_DEST_MDHA_SPLIT | KEY_ENC);
+ KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
+ ctx->split_key_len, IMMED | COPY);
}

/* Append key if it has been set */
-static inline void init_sh_desc_key_ahash(u32 *desc, struct caam_hash_ctx *ctx)
+static inline void init_sh_desc_key_ahash(struct program *p,
+ struct caam_hash_ctx *ctx)
{
- u32 *key_jump_cmd;
+ LABEL(key_jump_cmd);
+ REFERENCE(pkey_jump_cmd);

- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ SHR_HDR(p, SHR_SERIAL, 1, 0);

if (ctx->split_key_len) {
/* Skip if already shared */
- key_jump_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
- JUMP_COND_SHRD);
+ pkey_jump_cmd = JUMP(p, key_jump_cmd, LOCAL_JUMP, ALL_TRUE,
+ SHRD);

- append_key_ahash(desc, ctx);
+ append_key_ahash(p, ctx);

- set_jump_tgt_here(desc, key_jump_cmd);
+ SET_LABEL(p, key_jump_cmd);
+
+ PATCH_JUMP(p, pkey_jump_cmd, key_jump_cmd);
}
}

@@ -254,55 +260,54 @@ static inline void init_sh_desc_key_ahash(u32 *desc, struct caam_hash_ctx *ctx)
* and write resulting class2 context to seqout, which may be state->caam_ctx
* or req->result
*/
-static inline void ahash_append_load_str(u32 *desc, int digestsize)
+static inline void ahash_append_load_str(struct program *p, int digestsize)
{
/* Calculate remaining bytes to read */
- append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, CAAM_CMD_SZ, 0);

/* Read remaining bytes */
- append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_LAST2 |
- FIFOLD_TYPE_MSG | KEY_VLF);
+ SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);

/* Store class2 context bytes */
- append_seq_store(desc, digestsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_BYTE_CONTEXT);
+ SEQSTORE(p, CONTEXT2, 0, digestsize, 0);
}

/*
* For ahash update, final and finup, import context, read and write to seqout
*/
-static inline void ahash_ctx_data_to_out(u32 *desc, u32 op, u32 state,
+static inline void ahash_ctx_data_to_out(struct program *p, u32 op, u32 state,
int digestsize,
struct caam_hash_ctx *ctx)
{
- init_sh_desc_key_ahash(desc, ctx);
+ init_sh_desc_key_ahash(p, ctx);

/* Import context from software */
- append_cmd(desc, CMD_SEQ_LOAD | LDST_SRCDST_BYTE_CONTEXT |
- LDST_CLASS_2_CCB | ctx->ctx_len);
+ SEQLOAD(p, CONTEXT2, 0, ctx->ctx_len, 0);

/* Class 2 operation */
- append_operation(desc, op | state | OP_ALG_ENCRYPT);
+ ALG_OPERATION(p, op & OP_ALG_ALGSEL_MASK, op & OP_ALG_AAI_MASK, state,
+ ICV_CHECK_DISABLE, DIR_ENC);

/*
* Load from buf and/or src and write to req->result or state->context
*/
- ahash_append_load_str(desc, digestsize);
+ ahash_append_load_str(p, digestsize);
}

/* For ahash firsts and digest, read and write to seqout */
-static inline void ahash_data_to_out(u32 *desc, u32 op, u32 state,
+static inline void ahash_data_to_out(struct program *p, u32 op, u32 state,
int digestsize, struct caam_hash_ctx *ctx)
{
- init_sh_desc_key_ahash(desc, ctx);
+ init_sh_desc_key_ahash(p, ctx);

/* Class 2 operation */
- append_operation(desc, op | state | OP_ALG_ENCRYPT);
+ ALG_OPERATION(p, op & OP_ALG_ALGSEL_MASK, op & OP_ALG_AAI_MASK, state,
+ ICV_CHECK_DISABLE, DIR_ENC);

/*
* Load from buf and/or src and write to req->result or state->context
*/
- ahash_append_load_str(desc, digestsize);
+ ahash_append_load_str(p, digestsize);
}

static int ahash_set_sh_desc(struct crypto_ahash *ahash)
@@ -312,27 +317,34 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
struct device *jrdev = ctx->jrdev;
u32 have_key = 0;
u32 *desc;
+ struct program prg;
+ struct program *p = &prg;

if (ctx->split_key_len)
have_key = OP_ALG_AAI_HMAC_PRECOMP;

/* ahash_update shared descriptor */
desc = ctx->sh_desc_update;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ SHR_HDR(p, SHR_SERIAL, 1, 0);

/* Import context from software */
- append_cmd(desc, CMD_SEQ_LOAD | LDST_SRCDST_BYTE_CONTEXT |
- LDST_CLASS_2_CCB | ctx->ctx_len);
+ SEQLOAD(p, CONTEXT2, 0, ctx->ctx_len, 0);

/* Class 2 operation */
- append_operation(desc, ctx->alg_type | OP_ALG_AS_UPDATE |
- OP_ALG_ENCRYPT);
+ ALG_OPERATION(p, ctx->alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->alg_type & OP_ALG_AAI_MASK, OP_ALG_AS_UPDATE,
+ ICV_CHECK_DISABLE, DIR_ENC);

/* Load data and write to result or context */
- ahash_append_load_str(desc, ctx->ctx_len);
+ ahash_append_load_str(p, ctx->ctx_len);
+
+ PROGRAM_FINALIZE(p);

- ctx->sh_desc_update_dma = dma_map_single(jrdev, desc, desc_bytes(desc),
+ ctx->sh_desc_update_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_update_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -341,17 +353,22 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"ahash update shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

/* ahash_update_first shared descriptor */
desc = ctx->sh_desc_update_first;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

- ahash_data_to_out(desc, have_key | ctx->alg_type, OP_ALG_AS_INIT,
+ ahash_data_to_out(p, have_key | ctx->alg_type, OP_ALG_AS_INIT,
ctx->ctx_len, ctx);

+ PROGRAM_FINALIZE(p);
+
ctx->sh_desc_update_first_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_update_first_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -360,16 +377,21 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"ahash update first shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

/* ahash_final shared descriptor */
desc = ctx->sh_desc_fin;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

- ahash_ctx_data_to_out(desc, have_key | ctx->alg_type,
- OP_ALG_AS_FINALIZE, digestsize, ctx);
+ ahash_ctx_data_to_out(p, have_key | ctx->alg_type, OP_ALG_AS_FINALIZE,
+ digestsize, ctx);

- ctx->sh_desc_fin_dma = dma_map_single(jrdev, desc, desc_bytes(desc),
+ PROGRAM_FINALIZE(p);
+
+ ctx->sh_desc_fin_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_fin_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -377,17 +399,21 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "ahash final shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

/* ahash_finup shared descriptor */
desc = ctx->sh_desc_finup;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ ahash_ctx_data_to_out(p, have_key | ctx->alg_type, OP_ALG_AS_FINALIZE,
+ digestsize, ctx);

- ahash_ctx_data_to_out(desc, have_key | ctx->alg_type,
- OP_ALG_AS_FINALIZE, digestsize, ctx);
+ PROGRAM_FINALIZE(p);

- ctx->sh_desc_finup_dma = dma_map_single(jrdev, desc, desc_bytes(desc),
+ ctx->sh_desc_finup_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_finup_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -395,18 +421,21 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "ahash finup shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

/* ahash_digest shared descriptor */
desc = ctx->sh_desc_digest;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

- ahash_data_to_out(desc, have_key | ctx->alg_type, OP_ALG_AS_INITFINAL,
+ ahash_data_to_out(p, have_key | ctx->alg_type, OP_ALG_AS_INITFINAL,
digestsize, ctx);

- ctx->sh_desc_digest_dma = dma_map_single(jrdev, desc,
- desc_bytes(desc),
+ PROGRAM_FINALIZE(p);
+
+ ctx->sh_desc_digest_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_digest_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -415,8 +444,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
#ifdef DEBUG
print_hex_dump(KERN_ERR,
"ahash digest shdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

return 0;
@@ -425,9 +453,8 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
static int gen_split_hash_key(struct caam_hash_ctx *ctx, const u8 *key_in,
u32 keylen)
{
- return gen_split_key(ctx->jrdev, ctx->key, ctx->split_key_len,
- ctx->split_key_pad_len, key_in, keylen,
- ctx->alg_op);
+ return gen_split_key(ctx->jrdev, ctx->key, ctx->split_key_pad_len,
+ key_in, keylen, ctx->alg_type);
}

/* Digest hash size if it is too large */
@@ -439,6 +466,8 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
struct split_key_result result;
dma_addr_t src_dma, dst_dma;
int ret = 0;
+ struct program prg;
+ struct program *p = &prg;

desc = kmalloc(CAAM_CMD_SZ * 8 + CAAM_PTR_SZ * 2, GFP_KERNEL | GFP_DMA);
if (!desc) {
@@ -446,7 +475,11 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
return -ENOMEM;
}

- init_job_desc(desc, 0);
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_NEVER, 0, 0, 0);

src_dma = dma_map_single(jrdev, (void *)key_in, *keylen,
DMA_TO_DEVICE);
@@ -465,20 +498,21 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
}

/* Job descriptor to perform unkeyed hash on key_in */
- append_operation(desc, ctx->alg_type | OP_ALG_ENCRYPT |
- OP_ALG_AS_INITFINAL);
- append_seq_in_ptr(desc, src_dma, *keylen, 0);
- append_seq_fifo_load(desc, *keylen, FIFOLD_CLASS_CLASS2 |
- FIFOLD_TYPE_LAST2 | FIFOLD_TYPE_MSG);
- append_seq_out_ptr(desc, dst_dma, digestsize, 0);
- append_seq_store(desc, digestsize, LDST_CLASS_2_CCB |
- LDST_SRCDST_BYTE_CONTEXT);
+ ALG_OPERATION(p, ctx->alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->alg_type & OP_ALG_AAI_MASK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, DIR_ENC);
+ SEQINPTR(p, src_dma, *keylen, EXT);
+ SEQFIFOLOAD(p, MSG2, *keylen, LAST2);
+ SEQOUTPTR(p, dst_dma, digestsize, EXT);
+ SEQSTORE(p, CONTEXT2, 0, digestsize, 0);
+
+ PROGRAM_FINALIZE(p);

#ifdef DEBUG
print_hex_dump(KERN_ERR, "key_in@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, key_in, *keylen, 1);
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

result.err = 0;
@@ -509,8 +543,6 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
static int ahash_setkey(struct crypto_ahash *ahash,
const u8 *key, unsigned int keylen)
{
- /* Sizes for MDHA pads (*not* keys): MD5, SHA1, 224, 256, 384, 512 */
- static const u8 mdpadlen[] = { 16, 20, 32, 32, 64, 64 };
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
struct device *jrdev = ctx->jrdev;
int blocksize = crypto_tfm_alg_blocksize(&ahash->base);
@@ -534,10 +566,10 @@ static int ahash_setkey(struct crypto_ahash *ahash,
key = hashed_key;
}

- /* Pick class 2 key length from algorithm submask */
- ctx->split_key_len = mdpadlen[(ctx->alg_op & OP_ALG_ALGSEL_SUBMASK) >>
- OP_ALG_ALGSEL_SHIFT] * 2;
- ctx->split_key_pad_len = ALIGN(ctx->split_key_len, 16);
+ /* Compute class 2 key length */
+ ctx->split_key_len = split_key_len(ctx->alg_type & OP_ALG_ALGSEL_MASK);
+ ctx->split_key_pad_len = split_key_pad_len(ctx->alg_type &
+ OP_ALG_ALGSEL_MASK);

#ifdef DEBUG
printk(KERN_ERR "split_key_len %d split_key_pad_len %d\n",
@@ -783,7 +815,9 @@ static int ahash_update_ctx(struct ahash_request *req)
struct ahash_edesc *edesc;
bool chained = false;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *p = &prg;

last_buflen = *next_buflen;
*next_buflen = in_len & (crypto_tfm_alg_blocksize(&ahash->base) - 1);
@@ -838,10 +872,13 @@ static int ahash_update_ctx(struct ahash_request *req)
SEC4_SG_LEN_FIN;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER |
- HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
sec4_sg_bytes,
@@ -851,15 +888,16 @@ static int ahash_update_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, edesc->sec4_sg_dma, ctx->ctx_len +
- to_hash, LDST_SGF);
+ SEQINPTR(p, edesc->sec4_sg_dma, ctx->ctx_len + to_hash,
+ SGF | EXT);
+ SEQOUTPTR(p, state->ctx_dma, ctx->ctx_len, EXT);

- append_seq_out_ptr(desc, state->ctx_dma, ctx->ctx_len, 0);
+ PROGRAM_FINALIZE(p);

#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req);
@@ -904,7 +942,9 @@ static int ahash_final_ctx(struct ahash_request *req)
int digestsize = crypto_ahash_digestsize(ahash);
struct ahash_edesc *edesc;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *p = &prg;

sec4_sg_bytes = (1 + (buflen ? 1 : 0)) * sizeof(struct sec4_sg_entry);

@@ -916,9 +956,13 @@ static int ahash_final_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->sec4_sg_bytes = sec4_sg_bytes;
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
@@ -942,19 +986,20 @@ static int ahash_final_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, edesc->sec4_sg_dma, ctx->ctx_len + buflen,
- LDST_SGF);
+ SEQINPTR(p, edesc->sec4_sg_dma, ctx->ctx_len + buflen, SGF | EXT);

- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+ edesc->dst_dma = map_seq_out_ptr_result(p, jrdev, req->result,
digestsize);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}

+ PROGRAM_FINALIZE(p);
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
@@ -988,7 +1033,9 @@ static int ahash_finup_ctx(struct ahash_request *req)
struct ahash_edesc *edesc;
bool chained = false;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *p = &prg;

src_nents = __sg_count(req->src, req->nbytes, &chained);
sec4_sg_src_index = 1 + (buflen ? 1 : 0);
@@ -1003,9 +1050,13 @@ static int ahash_finup_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->src_nents = src_nents;
edesc->chained = chained;
@@ -1032,19 +1083,21 @@ static int ahash_finup_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, edesc->sec4_sg_dma, ctx->ctx_len +
- buflen + req->nbytes, LDST_SGF);
+ SEQINPTR(p, edesc->sec4_sg_dma, ctx->ctx_len + buflen + req->nbytes,
+ SGF | EXT);

- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+ edesc->dst_dma = map_seq_out_ptr_result(p, jrdev, req->result,
digestsize);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}

+ PROGRAM_FINALIZE(p);
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
@@ -1073,8 +1126,10 @@ static int ahash_digest(struct ahash_request *req)
struct ahash_edesc *edesc;
bool chained = false;
int ret = 0;
- u32 options;
- int sh_len;
+ u32 options = EXT;
+ unsigned sh_len;
+ struct program prg;
+ struct program *p = &prg;

src_nents = sg_count(req->src, req->nbytes, &chained);
dma_map_sg_chained(jrdev, req->src, src_nents ? : 1, DMA_TO_DEVICE,
@@ -1094,9 +1149,13 @@ static int ahash_digest(struct ahash_request *req)
edesc->src_nents = src_nents;
edesc->chained = chained;

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);

if (src_nents) {
sg_to_sec4_sg_last(req->src, src_nents, edesc->sec4_sg, 0);
@@ -1107,23 +1166,24 @@ static int ahash_digest(struct ahash_request *req)
return -ENOMEM;
}
src_dma = edesc->sec4_sg_dma;
- options = LDST_SGF;
+ options |= SGF;
} else {
src_dma = sg_dma_address(req->src);
- options = 0;
}
- append_seq_in_ptr(desc, src_dma, req->nbytes, options);
+ SEQINPTR(p, src_dma, req->nbytes, options);

- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+ edesc->dst_dma = map_seq_out_ptr_result(p, jrdev, req->result,
digestsize);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}

+ PROGRAM_FINALIZE(p);
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
@@ -1153,7 +1213,9 @@ static int ahash_final_no_ctx(struct ahash_request *req)
int digestsize = crypto_ahash_digestsize(ahash);
struct ahash_edesc *edesc;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *p = &prg;

/* allocate space for base edesc and hw desc commands, link tables */
edesc = kmalloc(sizeof(struct ahash_edesc) + DESC_JOB_IO_LEN,
@@ -1163,9 +1225,13 @@ static int ahash_final_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);

state->buf_dma = dma_map_single(jrdev, buf, buflen, DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, state->buf_dma)) {
@@ -1173,19 +1239,22 @@ static int ahash_final_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, state->buf_dma, buflen, 0);
+ SEQINPTR(p, state->buf_dma, buflen, EXT);

- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+ edesc->dst_dma = map_seq_out_ptr_result(p, jrdev, req->result,
digestsize);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}
+
+ PROGRAM_FINALIZE(p);
+
edesc->src_nents = 0;

#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
@@ -1220,7 +1289,9 @@ static int ahash_update_no_ctx(struct ahash_request *req)
dma_addr_t ptr = ctx->sh_desc_update_first_dma;
bool chained = false;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *p = &prg;

*next_buflen = in_len & (crypto_tfm_alg_blocksize(&ahash->base) - 1);
to_hash = in_len - *next_buflen;
@@ -1260,10 +1331,13 @@ static int ahash_update_no_ctx(struct ahash_request *req)
state->current_buf = !state->current_buf;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER |
- HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
sec4_sg_bytes,
@@ -1273,16 +1347,18 @@ static int ahash_update_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, edesc->sec4_sg_dma, to_hash, LDST_SGF);
+ SEQINPTR(p, edesc->sec4_sg_dma, to_hash, SGF | EXT);

- ret = map_seq_out_ptr_ctx(desc, jrdev, state, ctx->ctx_len);
+ ret = map_seq_out_ptr_ctx(p, jrdev, state, ctx->ctx_len);
if (ret)
return ret;

+ PROGRAM_FINALIZE(p);
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
@@ -1331,8 +1407,10 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
int digestsize = crypto_ahash_digestsize(ahash);
struct ahash_edesc *edesc;
bool chained = false;
- int sh_len;
+ unsigned sh_len;
int ret = 0;
+ struct program prg;
+ struct program *p = &prg;

src_nents = __sg_count(req->src, req->nbytes, &chained);
sec4_sg_src_index = 2;
@@ -1347,9 +1425,13 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->src_nents = src_nents;
edesc->chained = chained;
@@ -1371,19 +1453,20 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- append_seq_in_ptr(desc, edesc->sec4_sg_dma, buflen +
- req->nbytes, LDST_SGF);
+ SEQINPTR(p, edesc->sec4_sg_dma, buflen + req->nbytes, SGF | EXT);

- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+ edesc->dst_dma = map_seq_out_ptr_result(p, jrdev, req->result,
digestsize);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}

+ PROGRAM_FINALIZE(p);
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
@@ -1414,11 +1497,13 @@ static int ahash_update_first(struct ahash_request *req)
dma_addr_t ptr = ctx->sh_desc_update_first_dma;
int sec4_sg_bytes, src_nents;
dma_addr_t src_dma;
- u32 options;
+ u32 options = EXT;
struct ahash_edesc *edesc;
bool chained = false;
int ret = 0;
- int sh_len;
+ unsigned sh_len;
+ struct program prg;
+ struct program *p = &prg;

*next_buflen = req->nbytes & (crypto_tfm_alg_blocksize(&ahash->base) -
1);
@@ -1462,30 +1547,34 @@ static int ahash_update_first(struct ahash_request *req)
return -ENOMEM;
}
src_dma = edesc->sec4_sg_dma;
- options = LDST_SGF;
+ options |= SGF;
} else {
src_dma = sg_dma_address(req->src);
- options = 0;
}

if (*next_buflen)
sg_copy_part(next_buf, req->src, to_hash, req->nbytes);

- sh_len = desc_len(sh_desc);
+ sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER |
- HDR_REVERSE);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);

- append_seq_in_ptr(desc, src_dma, to_hash, options);
+ SEQINPTR(p, src_dma, to_hash, options);

- ret = map_seq_out_ptr_ctx(desc, jrdev, state, ctx->ctx_len);
+ ret = map_seq_out_ptr_ctx(p, jrdev, state, ctx->ctx_len);
if (ret)
return ret;

+ PROGRAM_FINALIZE(p);
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, desc,
- desc_bytes(desc), 1);
+ DESC_BYTES(desc), 1);
#endif

ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
@@ -1587,7 +1676,6 @@ struct caam_hash_template {
unsigned int blocksize;
struct ahash_alg template_ahash;
u32 alg_type;
- u32 alg_op;
};

/* ahash descriptors */
@@ -1612,7 +1700,6 @@ static struct caam_hash_template driver_hash[] = {
},
},
.alg_type = OP_ALG_ALGSEL_SHA1,
- .alg_op = OP_ALG_ALGSEL_SHA1 | OP_ALG_AAI_HMAC,
}, {
.name = "sha224",
.driver_name = "sha224-caam",
@@ -1633,7 +1720,6 @@ static struct caam_hash_template driver_hash[] = {
},
},
.alg_type = OP_ALG_ALGSEL_SHA224,
- .alg_op = OP_ALG_ALGSEL_SHA224 | OP_ALG_AAI_HMAC,
}, {
.name = "sha256",
.driver_name = "sha256-caam",
@@ -1654,7 +1740,6 @@ static struct caam_hash_template driver_hash[] = {
},
},
.alg_type = OP_ALG_ALGSEL_SHA256,
- .alg_op = OP_ALG_ALGSEL_SHA256 | OP_ALG_AAI_HMAC,
}, {
.name = "sha384",
.driver_name = "sha384-caam",
@@ -1675,7 +1760,6 @@ static struct caam_hash_template driver_hash[] = {
},
},
.alg_type = OP_ALG_ALGSEL_SHA384,
- .alg_op = OP_ALG_ALGSEL_SHA384 | OP_ALG_AAI_HMAC,
}, {
.name = "sha512",
.driver_name = "sha512-caam",
@@ -1696,7 +1780,6 @@ static struct caam_hash_template driver_hash[] = {
},
},
.alg_type = OP_ALG_ALGSEL_SHA512,
- .alg_op = OP_ALG_ALGSEL_SHA512 | OP_ALG_AAI_HMAC,
}, {
.name = "md5",
.driver_name = "md5-caam",
@@ -1717,14 +1800,12 @@ static struct caam_hash_template driver_hash[] = {
},
},
.alg_type = OP_ALG_ALGSEL_MD5,
- .alg_op = OP_ALG_ALGSEL_MD5 | OP_ALG_AAI_HMAC,
},
};

struct caam_hash_alg {
struct list_head entry;
int alg_type;
- int alg_op;
struct ahash_alg ahash_alg;
};

@@ -1759,9 +1840,8 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
}
/* copy descriptor header template value */
ctx->alg_type = OP_TYPE_CLASS2_ALG | caam_hash->alg_type;
- ctx->alg_op = OP_TYPE_CLASS2_ALG | caam_hash->alg_op;

- ctx->ctx_len = runninglen[(ctx->alg_op & OP_ALG_ALGSEL_SUBMASK) >>
+ ctx->ctx_len = runninglen[(ctx->alg_type & OP_ALG_ALGSEL_SUBMASK) >>
OP_ALG_ALGSEL_SHIFT];

crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
@@ -1779,26 +1859,26 @@ static void caam_hash_cra_exit(struct crypto_tfm *tfm)
if (ctx->sh_desc_update_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_update_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_update_dma,
- desc_bytes(ctx->sh_desc_update),
+ DESC_BYTES(ctx->sh_desc_update),
DMA_TO_DEVICE);
if (ctx->sh_desc_update_first_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_update_first_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_update_first_dma,
- desc_bytes(ctx->sh_desc_update_first),
+ DESC_BYTES(ctx->sh_desc_update_first),
DMA_TO_DEVICE);
if (ctx->sh_desc_fin_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_fin_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_fin_dma,
- desc_bytes(ctx->sh_desc_fin), DMA_TO_DEVICE);
+ DESC_BYTES(ctx->sh_desc_fin), DMA_TO_DEVICE);
if (ctx->sh_desc_digest_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_digest_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_digest_dma,
- desc_bytes(ctx->sh_desc_digest),
+ DESC_BYTES(ctx->sh_desc_digest),
DMA_TO_DEVICE);
if (ctx->sh_desc_finup_dma &&
!dma_mapping_error(ctx->jrdev, ctx->sh_desc_finup_dma))
dma_unmap_single(ctx->jrdev, ctx->sh_desc_finup_dma,
- desc_bytes(ctx->sh_desc_finup), DMA_TO_DEVICE);
+ DESC_BYTES(ctx->sh_desc_finup), DMA_TO_DEVICE);

caam_jr_free(ctx->jrdev);
}
@@ -1857,7 +1937,6 @@ caam_hash_alloc(struct caam_hash_template *template,
alg->cra_type = &crypto_ahash_type;

t_alg->alg_type = template->alg_type;
- t_alg->alg_op = template->alg_op;

return t_alg;
}
diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index 5b288082e6ac..9bffa6168536 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -39,7 +39,7 @@

#include "regs.h"
#include "intern.h"
-#include "desc_constr.h"
+#include "flib/rta.h"
#include "jr.h"
#include "error.h"

@@ -77,6 +77,7 @@ struct caam_rng_ctx {
};

static struct caam_rng_ctx *rng_ctx;
+static const bool ps = (sizeof(dma_addr_t) == sizeof(u64));

static inline void rng_unmap_buf(struct device *jrdev, struct buf_data *bd)
{
@@ -91,7 +92,7 @@ static inline void rng_unmap_ctx(struct caam_rng_ctx *ctx)

if (ctx->sh_desc_dma)
dma_unmap_single(jrdev, ctx->sh_desc_dma,
- desc_bytes(ctx->sh_desc), DMA_TO_DEVICE);
+ DESC_BYTES(ctx->sh_desc), DMA_TO_DEVICE);
rng_unmap_buf(jrdev, &ctx->bufs[0]);
rng_unmap_buf(jrdev, &ctx->bufs[1]);
}
@@ -189,16 +190,24 @@ static inline int rng_create_sh_desc(struct caam_rng_ctx *ctx)
{
struct device *jrdev = ctx->jrdev;
u32 *desc = ctx->sh_desc;
+ struct program prg;
+ struct program *p = &prg;

- init_sh_desc(desc, HDR_SHARE_SERIAL);
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ SHR_HDR(p, SHR_SERIAL, 1, 0);

/* Generate random bytes */
- append_operation(desc, OP_ALG_ALGSEL_RNG | OP_TYPE_CLASS1_ALG);
+ ALG_OPERATION(p, OP_ALG_ALGSEL_RNG, OP_ALG_AAI_RNG, 0, 0, 0);

/* Store bytes */
- append_seq_fifo_store(desc, RN_BUF_SIZE, FIFOST_TYPE_RNGSTORE);
+ SEQFIFOSTORE(p, RNG, 0, RN_BUF_SIZE, 0);
+
+ PROGRAM_FINALIZE(p);

- ctx->sh_desc_dma = dma_map_single(jrdev, desc, desc_bytes(desc),
+ ctx->sh_desc_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
@@ -206,7 +215,7 @@ static inline int rng_create_sh_desc(struct caam_rng_ctx *ctx)
}
#ifdef DEBUG
print_hex_dump(KERN_ERR, "rng shdesc@: ", DUMP_PREFIX_ADDRESS, 16, 4,
- desc, desc_bytes(desc), 1);
+ desc, DESC_BYTES(desc), 1);
#endif
return 0;
}
@@ -216,10 +225,15 @@ static inline int rng_create_job_desc(struct caam_rng_ctx *ctx, int buf_id)
struct device *jrdev = ctx->jrdev;
struct buf_data *bd = &ctx->bufs[buf_id];
u32 *desc = bd->hw_desc;
- int sh_len = desc_len(ctx->sh_desc);
+ unsigned sh_len = DESC_LEN(ctx->sh_desc);
+ struct program prg;
+ struct program *p = &prg;
+
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

- init_job_desc_shared(desc, ctx->sh_desc_dma, sh_len, HDR_SHARE_DEFER |
- HDR_REVERSE);
+ JOB_HDR(p, SHR_DEFER, sh_len, ctx->sh_desc_dma, REO | SHR);

bd->addr = dma_map_single(jrdev, bd->buf, RN_BUF_SIZE, DMA_FROM_DEVICE);
if (dma_mapping_error(jrdev, bd->addr)) {
@@ -227,10 +241,13 @@ static inline int rng_create_job_desc(struct caam_rng_ctx *ctx, int buf_id)
return -ENOMEM;
}

- append_seq_out_ptr_intlen(desc, bd->addr, RN_BUF_SIZE, 0);
+ SEQOUTPTR(p, bd->addr, RN_BUF_SIZE, 0);
+
+ PROGRAM_FINALIZE(p);
+
#ifdef DEBUG
print_hex_dump(KERN_ERR, "rng job desc@: ", DUMP_PREFIX_ADDRESS, 16, 4,
- desc, desc_bytes(desc), 1);
+ desc, DESC_BYTES(desc), 1);
#endif
return 0;
}
diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index 69736b6f07ae..ead1041d20c1 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -13,25 +13,35 @@
#include "regs.h"
#include "intern.h"
#include "jr.h"
-#include "desc_constr.h"
+#include "flib/rta.h"
#include "error.h"
#include "ctrl.h"

+enum rta_sec_era rta_sec_era;
+EXPORT_SYMBOL(rta_sec_era);
+
+static const bool ps = (sizeof(dma_addr_t) == sizeof(u64));
+
/*
* Descriptor to instantiate RNG State Handle 0 in normal mode and
* load the JDKEK, TDKEK and TDSK registers
*/
static void build_instantiation_desc(u32 *desc, int handle, int do_sk)
{
- u32 *jump_cmd, op_flags;
-
- init_job_desc(desc, 0);
+ struct program prg;
+ struct program *p = &prg;
+ LABEL(jump_cmd);
+ REFERENCE(pjump_cmd);

- op_flags = OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |
- (handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INIT;
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

/* INIT RNG in non-test mode */
- append_operation(desc, op_flags);
+ ALG_OPERATION(p, OP_ALG_ALGSEL_RNG,
+ (u16)(OP_ALG_AAI_RNG |
+ (handle << OP_ALG_AAI_RNG4_SH_SHIFT)),
+ OP_ALG_AS_INIT, 0, 0);

if (!handle && do_sk) {
/*
@@ -39,33 +49,48 @@ static void build_instantiation_desc(u32 *desc, int handle, int do_sk)
*/

/* wait for done */
- jump_cmd = append_jump(desc, JUMP_CLASS_CLASS1);
- set_jump_tgt_here(desc, jump_cmd);
+ pjump_cmd = JUMP(p, jump_cmd, LOCAL_JUMP, ALL_TRUE, CLASS1);
+ SET_LABEL(p, jump_cmd);

/*
* load 1 to clear written reg:
* resets the done interrrupt and returns the RNG to idle.
*/
- append_load_imm_u32(desc, 1, LDST_SRCDST_WORD_CLRW);
+ LOAD(p, CLRW_CLR_C1MODE, CLRW, 0, CAAM_CMD_SZ, IMMED);

/* Initialize State Handle */
- append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |
- OP_ALG_AAI_RNG4_SK);
+ ALG_OPERATION(p, OP_ALG_ALGSEL_RNG, OP_ALG_AAI_RNG4_SK,
+ OP_ALG_AS_UPDATE, 0, 0);
}

- append_jump(desc, JUMP_CLASS_CLASS1 | JUMP_TYPE_HALT);
+ JUMP(p, 0, HALT, ALL_TRUE, CLASS1 | IMMED);
+
+ PATCH_JUMP(p, pjump_cmd, jump_cmd);
+
+ PROGRAM_FINALIZE(p);
}

/* Descriptor for deinstantiation of State Handle 0 of the RNG block. */
static void build_deinstantiation_desc(u32 *desc, int handle)
{
- init_job_desc(desc, 0);
+ struct program prg;
+ struct program *p = &prg;
+
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ JOB_HDR(p, SHR_NEVER, 1, 0, 0);

/* Uninstantiate State Handle 0 */
- append_operation(desc, OP_TYPE_CLASS1_ALG | OP_ALG_ALGSEL_RNG |
- (handle << OP_ALG_AAI_SHIFT) | OP_ALG_AS_INITFINAL);
+ ALG_OPERATION(p, OP_ALG_ALGSEL_RNG,
+ (u16)(OP_ALG_AAI_RNG |
+ (handle << OP_ALG_AAI_RNG4_SH_SHIFT)),
+ OP_ALG_AS_INITFINAL, 0, 0);
+
+ JUMP(p, 0, HALT, ALL_TRUE, CLASS1 | IMMED);

- append_jump(desc, JUMP_CLASS_CLASS1 | JUMP_TYPE_HALT);
+ PROGRAM_FINALIZE(p);
}

/*
@@ -112,7 +137,7 @@ static inline int run_descriptor_deco0(struct device *ctrldev, u32 *desc,
return -ENODEV;
}

- for (i = 0; i < desc_len(desc); i++)
+ for (i = 0; i < DESC_LEN(desc); i++)
wr_reg32(&topregs->deco.descbuf[i], *(desc + i));

flags = DECO_JQCR_WHL;
@@ -120,7 +145,7 @@ static inline int run_descriptor_deco0(struct device *ctrldev, u32 *desc,
* If the descriptor length is longer than 4 words, then the
* FOUR bit in JRCTRL register must be set.
*/
- if (desc_len(desc) >= 4)
+ if (DESC_LEN(desc) >= 4)
flags |= DECO_JQCR_FOUR;

/* Instruct the DECO to execute it */
@@ -365,8 +390,9 @@ static void kick_trng(struct platform_device *pdev, int ent_delay)
/**
* caam_get_era() - Return the ERA of the SEC on SoC, based
* on "sec-era" propery in the DTS. This property is updated by u-boot.
+ * Returns the ERA number or -ENOTSUPP if the ERA is unknown.
**/
-int caam_get_era(void)
+static int caam_get_era(void)
{
struct device_node *caam_node;

@@ -381,7 +407,6 @@ int caam_get_era(void)

return -ENOTSUPP;
}
-EXPORT_SYMBOL(caam_get_era);

/* Probe routine for CAAM top (controller) level */
static int caam_probe(struct platform_device *pdev)
@@ -429,7 +454,7 @@ static int caam_probe(struct platform_device *pdev)
* long pointers in master configuration register
*/
setbits32(&topregs->ctrl.mcr, MCFGR_WDENABLE |
- (sizeof(dma_addr_t) == sizeof(u64) ? MCFGR_LONG_PTR : 0));
+ (ps ? MCFGR_LONG_PTR : 0));

/*
* Read the Compile Time paramters and SCFGR to determine
@@ -458,7 +483,7 @@ static int caam_probe(struct platform_device *pdev)
JRSTART_JR1_START | JRSTART_JR2_START |
JRSTART_JR3_START);

- if (sizeof(dma_addr_t) == sizeof(u64))
+ if (ps)
if (of_device_is_compatible(nprop, "fsl,sec-v5.0"))
dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
else
@@ -582,8 +607,16 @@ static int caam_probe(struct platform_device *pdev)
(u64)rd_reg32(&topregs->ctrl.perfmon.caam_id_ls);

/* Report "alive" for developer to see */
- dev_info(dev, "device ID = 0x%016llx (Era %d)\n", caam_id,
- caam_get_era());
+ dev_info(dev, "device ID = 0x%016llx\n", caam_id);
+ ret = caam_get_era();
+ if (ret >= 0) {
+ dev_info(dev, "Era %d\n", ret);
+ rta_set_sec_era(INTL_SEC_ERA(ret));
+ } else {
+ dev_warn(dev, "Era property not found! Defaulting to era %d\n",
+ USER_SEC_ERA(DEFAULT_SEC_ERA));
+ rta_set_sec_era(DEFAULT_SEC_ERA);
+ }
dev_info(dev, "job rings = %d, qi = %d\n",
ctrlpriv->total_jobrs, ctrlpriv->qi_present);

diff --git a/drivers/crypto/caam/ctrl.h b/drivers/crypto/caam/ctrl.h
index cac5402a46eb..93680a9290db 100644
--- a/drivers/crypto/caam/ctrl.h
+++ b/drivers/crypto/caam/ctrl.h
@@ -8,6 +8,6 @@
#define CTRL_H

/* Prototypes for backend-level services exposed to APIs */
-int caam_get_era(void);
+extern enum rta_sec_era rta_sec_era;

#endif /* CTRL_H */
diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c
index 871703c49d2c..e59e3d2d3b7c 100644
--- a/drivers/crypto/caam/key_gen.c
+++ b/drivers/crypto/caam/key_gen.c
@@ -7,7 +7,7 @@
#include "compat.h"
#include "jr.h"
#include "error.h"
-#include "desc_constr.h"
+#include "flib/desc/jobdesc.h"
#include "key_gen.h"

void split_key_done(struct device *dev, u32 *desc, u32 err,
@@ -41,14 +41,14 @@ Split key generation-----------------------------------------------
[06] 0x64260028 fifostr: class2 mdsplit-jdk len=40
@0xffe04000
*/
-int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_len,
- int split_key_pad_len, const u8 *key_in, u32 keylen,
- u32 alg_op)
+int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_pad_len,
+ const u8 *key_in, u32 keylen, u32 alg_op)
{
u32 *desc;
struct split_key_result result;
dma_addr_t dma_addr_in, dma_addr_out;
int ret = 0;
+ static const bool ps = (sizeof(dma_addr_t) == sizeof(u64));

desc = kmalloc(CAAM_CMD_SZ * 6 + CAAM_PTR_SZ * 2, GFP_KERNEL | GFP_DMA);
if (!desc) {
@@ -56,8 +56,6 @@ int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_len,
return -ENOMEM;
}

- init_job_desc(desc, 0);
-
dma_addr_in = dma_map_single(jrdev, (void *)key_in, keylen,
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, dma_addr_in)) {
@@ -65,22 +63,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_len,
kfree(desc);
return -ENOMEM;
}
- append_key(desc, dma_addr_in, keylen, CLASS_2 | KEY_DEST_CLASS_REG);
-
- /* Sets MDHA up into an HMAC-INIT */
- append_operation(desc, alg_op | OP_ALG_DECRYPT | OP_ALG_AS_INIT);
-
- /*
- * do a FIFO_LOAD of zero, this will trigger the internal key expansion
- * into both pads inside MDHA
- */
- append_fifo_load_as_imm(desc, NULL, 0, LDST_CLASS_2_CCB |
- FIFOLD_TYPE_MSG | FIFOLD_TYPE_LAST2);
-
- /*
- * FIFO_STORE with the explicit split-key content store
- * (0x26 output type)
- */
+
dma_addr_out = dma_map_single(jrdev, key_out, split_key_pad_len,
DMA_FROM_DEVICE);
if (dma_mapping_error(jrdev, dma_addr_out)) {
@@ -88,14 +71,16 @@ int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_len,
kfree(desc);
return -ENOMEM;
}
- append_fifo_store(desc, dma_addr_out, split_key_len,
- LDST_CLASS_2_CCB | FIFOST_TYPE_SPLIT_KEK);
+
+ /* keylen is expected to be less or equal block size (which is <=64) */
+ cnstr_jobdesc_mdsplitkey(desc, ps, dma_addr_in, (u8)keylen,
+ alg_op & OP_ALG_ALGSEL_MASK, dma_addr_out);

#ifdef DEBUG
print_hex_dump(KERN_ERR, "ctx.key@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, key_in, keylen, 1);
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
- DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc), 1);
+ DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
#endif

result.err = 0;
diff --git a/drivers/crypto/caam/key_gen.h b/drivers/crypto/caam/key_gen.h
index c5588f6d8109..170d3672288b 100644
--- a/drivers/crypto/caam/key_gen.h
+++ b/drivers/crypto/caam/key_gen.h
@@ -12,6 +12,5 @@ struct split_key_result {

void split_key_done(struct device *dev, u32 *desc, u32 err, void *context);

-int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_len,
- int split_key_pad_len, const u8 *key_in, u32 keylen,
- u32 alg_op);
+int gen_split_key(struct device *jrdev, u8 *key_out, int split_key_pad_len,
+ const u8 *key_in, u32 keylen, u32 alg_op);
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:33 UTC
aead shared descriptors are moved from caamalg into the RTA library
(ipsec.h), so that they can be shared with other applications.

ablkcipher encrypt / decrypt shared descriptors are refactored into
a single descriptor and moved into RTA (algo.h) for the same reason.

Other descriptors (e.g. from caamhash) are left as is,
since they are not general purpose.

Signed-off-by: Horia Geanta <***@freescale.com>
---
drivers/crypto/caam/caamalg.c | 592 +++++-----------------------------
drivers/crypto/caam/flib/desc/algo.h | 88 +++++
drivers/crypto/caam/flib/desc/ipsec.h | 550 +++++++++++++++++++++++++++++++
3 files changed, 720 insertions(+), 510 deletions(-)
create mode 100644 drivers/crypto/caam/flib/desc/algo.h
create mode 100644 drivers/crypto/caam/flib/desc/ipsec.h

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 9090fc8c04e0..746bb0b21695 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -50,6 +50,8 @@
#include "intern.h"
#include "flib/rta.h"
#include "flib/desc/common.h"
+#include "flib/desc/algo.h"
+#include "flib/desc/ipsec.h"
#include "jr.h"
#include "error.h"
#include "sg_sw_sec4.h"
@@ -65,22 +67,7 @@
/* max IV is max of AES_BLOCK_SIZE, DES3_EDE_BLOCK_SIZE */
#define CAAM_MAX_IV_LENGTH 16

-/* length of descriptors text */
-#define DESC_AEAD_BASE (4 * CAAM_CMD_SZ)
-#define DESC_AEAD_ENC_LEN (DESC_AEAD_BASE + 15 * CAAM_CMD_SZ)
-#define DESC_AEAD_DEC_LEN (DESC_AEAD_BASE + 18 * CAAM_CMD_SZ)
-#define DESC_AEAD_GIVENC_LEN (DESC_AEAD_ENC_LEN + 7 * CAAM_CMD_SZ)
-
-#define DESC_AEAD_NULL_BASE (3 * CAAM_CMD_SZ)
-#define DESC_AEAD_NULL_ENC_LEN (DESC_AEAD_NULL_BASE + 14 * CAAM_CMD_SZ)
-#define DESC_AEAD_NULL_DEC_LEN (DESC_AEAD_NULL_BASE + 17 * CAAM_CMD_SZ)
-
-#define DESC_ABLKCIPHER_BASE (3 * CAAM_CMD_SZ)
-#define DESC_ABLKCIPHER_ENC_LEN (DESC_ABLKCIPHER_BASE + \
- 20 * CAAM_CMD_SZ)
-#define DESC_ABLKCIPHER_DEC_LEN (DESC_ABLKCIPHER_BASE + \
- 15 * CAAM_CMD_SZ)
-
+/* maximum length of descriptors text */
#define DESC_MAX_USED_BYTES (DESC_AEAD_GIVENC_LEN + \
CAAM_MAX_KEY_SIZE)
#define DESC_MAX_USED_LEN (DESC_MAX_USED_BYTES / CAAM_CMD_SZ)
@@ -124,96 +111,33 @@ struct caam_ctx {

static int aead_null_set_sh_desc(struct crypto_aead *aead)
{
- struct aead_tfm *tfm = &aead->base.crt_aead;
struct caam_ctx *ctx = crypto_aead_ctx(aead);
struct device *jrdev = ctx->jrdev;
- bool keys_fit_inline = false;
u32 *desc;
- struct program prg;
- struct program *p = &prg;
unsigned desc_bytes;
- LABEL(skip_key_load);
- REFERENCE(pskip_key_load);
- LABEL(nop_cmd);
- REFERENCE(pnop_cmd);
- LABEL(read_move_cmd);
- REFERENCE(pread_move_cmd);
- LABEL(write_move_cmd);
- REFERENCE(pwrite_move_cmd);
+ struct alginfo authdata;
+ int rem_bytes = CAAM_DESC_BYTES_MAX - (DESC_JOB_IO_LEN +
+ ctx->split_key_pad_len);
+
+ authdata.algtype = ctx->class2_alg_type;
+ authdata.key_enc_flags = ENC;
+ authdata.keylen = ctx->split_key_len;

/*
* Job Descriptor and Shared Descriptors
* must all fit into the 64-word Descriptor h/w Buffer
*/
- if (DESC_AEAD_NULL_ENC_LEN + DESC_JOB_IO_LEN +
- ctx->split_key_pad_len <= CAAM_DESC_BYTES_MAX)
- keys_fit_inline = true;
+ if (rem_bytes >= DESC_AEAD_NULL_ENC_LEN) {
+ authdata.key = (uintptr_t)ctx->key;
+ authdata.key_type = RTA_DATA_IMM;
+ } else {
+ authdata.key = ctx->key_dma;
+ authdata.key_type = RTA_DATA_PTR;
+ }

/* aead_encrypt shared descriptor */
desc = ctx->sh_desc_enc;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- SHR_HDR(p, SHR_SERIAL, 1, 0);
-
- /* Skip if already shared */
- pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
- if (keys_fit_inline)
- KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
- ctx->split_key_len, IMMED | COPY);
- else
- KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
- 0);
- SET_LABEL(p, skip_key_load);
-
- /* cryptlen = seqoutlen - authsize */
- MATHB(p, SEQOUTSZ, SUB, ctx->authsize, MATH3, CAAM_CMD_SZ, IMMED2);
-
- /*
- * NULL encryption; IV is zero
- * assoclen = (assoclen + cryptlen) - cryptlen
- */
- MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
-
- /* read assoc before reading payload */
- SEQFIFOLOAD(p, MSG2, 0 , VLF);
-
- /* Prepare to read and write cryptlen bytes */
- MATHB(p, ZERO, ADD, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
- MATHB(p, ZERO, ADD, MATH3, VSEQOUTSZ, CAAM_CMD_SZ, 0);
-
- /*
- * MOVE_LEN opcode is not available in all SEC HW revisions,
- * thus need to do some magic, i.e. self-patch the descriptor
- * buffer.
- */
- pread_move_cmd = MOVE(p, DESCBUF, 0, MATH3, 0, 6, IMMED);
- pwrite_move_cmd = MOVE(p, MATH3, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
-
- /* Class 2 operation */
- ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class2_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
-
- /* Read and write cryptlen bytes */
- SEQFIFOSTORE(p, MSG, 0, 0, VLF);
- SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
-
- SET_LABEL(p, read_move_cmd);
- SET_LABEL(p, write_move_cmd);
- LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
- MOVE(p, IFIFOAB1, 0, OFIFO, 0, 0, IMMED);
-
- /* Write ICV */
- SEQSTORE(p, CONTEXT2, 0, ctx->authsize, 0);
-
- PATCH_JUMP(p, pskip_key_load, skip_key_load);
- PATCH_MOVE(p, pread_move_cmd, read_move_cmd);
- PATCH_MOVE(p, pwrite_move_cmd, write_move_cmd);
-
- PROGRAM_FINALIZE(p);
-
+ cnstr_shdsc_aead_null_encap(desc, ps, &authdata, ctx->authsize);
desc_bytes = DESC_BYTES(desc);
ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
@@ -231,83 +155,17 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
* Job Descriptor and Shared Descriptors
* must all fit into the 64-word Descriptor h/w Buffer
*/
- keys_fit_inline = false;
- if (DESC_AEAD_NULL_DEC_LEN + DESC_JOB_IO_LEN +
- ctx->split_key_pad_len <= CAAM_DESC_BYTES_MAX)
- keys_fit_inline = true;
+ if (rem_bytes >= DESC_AEAD_NULL_DEC_LEN) {
+ authdata.key = (uintptr_t)ctx->key;
+ authdata.key_type = RTA_DATA_IMM;
+ } else {
+ authdata.key = ctx->key_dma;
+ authdata.key_type = RTA_DATA_PTR;
+ }

/* aead_decrypt shared descriptor */
desc = ctx->sh_desc_dec;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- SHR_HDR(p, SHR_SERIAL, 1, 0);
-
- /* Skip if already shared */
- pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
- if (keys_fit_inline)
- KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
- ctx->split_key_len, IMMED | COPY);
- else
- KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
- 0);
- SET_LABEL(p, skip_key_load);
-
- /* Class 2 operation */
- ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class2_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);
-
- /* assoclen + cryptlen = seqinlen - ivsize - authsize */
- MATHB(p, SEQINSZ, SUB, ctx->authsize + tfm->ivsize, MATH3, CAAM_CMD_SZ,
- IMMED2);
- /* assoclen = (assoclen + cryptlen) - cryptlen */
- MATHB(p, SEQOUTSZ, SUB, MATH0, MATH2, CAAM_CMD_SZ, 0);
- MATHB(p, MATH3, SUB, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
-
- /* read assoc before reading payload */
- SEQFIFOLOAD(p, MSG2, 0 , VLF);
-
- /* Prepare to read and write cryptlen bytes */
- MATHB(p, ZERO, ADD, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
- MATHB(p, ZERO, ADD, MATH2, VSEQOUTSZ, CAAM_CMD_SZ, 0);
-
- /*
- * MOVE_LEN opcode is not available in all SEC HW revisions,
- * thus need to do some magic, i.e. self-patch the descriptor
- * buffer.
- */
- pread_move_cmd = MOVE(p, DESCBUF, 0, MATH2, 0, 6, IMMED);
- pwrite_move_cmd = MOVE(p, MATH2, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
-
- /* Read and write cryptlen bytes */
- SEQFIFOSTORE(p, MSG, 0, 0, VLF);
- SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
-
- /*
- * Insert a NOP here, since we need at least 4 instructions between
- * code patching the descriptor buffer and the location being patched.
- */
- pnop_cmd = JUMP(p, nop_cmd, LOCAL_JUMP, ALL_TRUE, 0);
- SET_LABEL(p, nop_cmd);
-
- SET_LABEL(p, read_move_cmd);
- SET_LABEL(p, write_move_cmd);
- LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
- MOVE(p, IFIFOAB1, 0, OFIFO, 0, 0, IMMED);
- LOAD(p, 0, DCTRL, LDOFF_ENABLE_AUTO_NFIFO, 0, IMMED);
-
- /* Load ICV */
- SEQFIFOLOAD(p, ICV2, ctx->authsize, LAST2);
-
- PATCH_JUMP(p, pskip_key_load, skip_key_load);
- PATCH_JUMP(p, pnop_cmd, nop_cmd);
- PATCH_MOVE(p, pread_move_cmd, read_move_cmd);
- PATCH_MOVE(p, pwrite_move_cmd, write_move_cmd);
-
- PROGRAM_FINALIZE(p);
-
+ cnstr_shdsc_aead_null_decap(desc, ps, &authdata, ctx->authsize);
desc_bytes = DESC_BYTES(desc);
ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
@@ -329,18 +187,10 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
struct aead_tfm *tfm = &aead->base.crt_aead;
struct caam_ctx *ctx = crypto_aead_ctx(aead);
struct device *jrdev = ctx->jrdev;
- bool keys_fit_inline = false;
- u32 geniv, moveiv;
u32 *desc;
- struct program prg;
- struct program *p = &prg;
unsigned desc_bytes;
- LABEL(skip_key_load);
- REFERENCE(pskip_key_load);
- LABEL(set_dk);
- REFERENCE(pset_dk);
- LABEL(skip_dk);
- REFERENCE(pskip_dk);
+ struct alginfo cipherdata, authdata;
+ int rem_bytes;

if (!ctx->authsize)
return 0;
@@ -349,81 +199,34 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
if (!ctx->enckeylen)
return aead_null_set_sh_desc(aead);

+ cipherdata.algtype = ctx->class1_alg_type;
+ cipherdata.key_enc_flags = 0;
+ cipherdata.keylen = ctx->enckeylen;
+ authdata.algtype = ctx->class2_alg_type;
+ authdata.key_enc_flags = ENC;
+ authdata.keylen = ctx->split_key_len;
+
+ rem_bytes = CAAM_DESC_BYTES_MAX - (DESC_JOB_IO_LEN +
+ ctx->split_key_pad_len + ctx->enckeylen);
+
/*
* Job Descriptor and Shared Descriptors
* must all fit into the 64-word Descriptor h/w Buffer
*/
- if (DESC_AEAD_ENC_LEN + DESC_JOB_IO_LEN +
- ctx->split_key_pad_len + ctx->enckeylen <=
- CAAM_DESC_BYTES_MAX)
- keys_fit_inline = true;
-
- /* aead_encrypt shared descriptor */
- desc = ctx->sh_desc_enc;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- SHR_HDR(p, SHR_SERIAL, 1, 0);
-
- /* Skip key loading if already shared */
- pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
-
- if (keys_fit_inline) {
- KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
- ctx->split_key_len, IMMED | COPY);
- KEY(p, KEY1, 0, (uintptr_t)(ctx->key + ctx->split_key_pad_len),
- ctx->enckeylen, IMMED | COPY);
+ if (rem_bytes >= DESC_AEAD_ENC_LEN) {
+ authdata.key = (uintptr_t)ctx->key;
+ authdata.key_type = RTA_DATA_IMM;
} else {
- KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
- 0);
- KEY(p, KEY1, 0, ctx->key_dma + ctx->split_key_pad_len,
- ctx->enckeylen, 0);
+ authdata.key = ctx->key_dma;
+ authdata.key_type = RTA_DATA_PTR;
}
+ cipherdata.key = authdata.key + ctx->split_key_pad_len;
+ cipherdata.key_type = authdata.key_type;

- SET_LABEL(p, skip_key_load);
-
- /* Class 2 operation */
- ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class2_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
-
- /* cryptlen = seqoutlen - authsize */
- MATHB(p, SEQOUTSZ, SUB, ctx->authsize, MATH3, CAAM_CMD_SZ, IMMED2);
-
- /* assoclen + cryptlen = seqinlen - ivsize */
- MATHB(p, SEQINSZ, SUB, tfm->ivsize, MATH2, CAAM_CMD_SZ, IMMED2);
-
- /* assoclen = (assoclen + cryptlen) - cryptlen */
- MATHB(p, MATH2, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
-
- /* read assoc before reading payload */
- SEQFIFOLOAD(p, MSG2, 0 , VLF);
-
- /* read iv for both classes */
- SEQLOAD(p, CONTEXT1, 0, tfm->ivsize, 0);
- MOVE(p, CONTEXT1, 0, IFIFOAB2, 0, tfm->ivsize, IMMED);
-
- /* Class 1 operation */
- ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class1_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
-
- /* Read and write cryptlen bytes */
- MATHB(p, ZERO, ADD, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
- MATHB(p, ZERO, ADD, MATH3, VSEQOUTSZ, CAAM_CMD_SZ, 0);
-
- /* Read and write payload */
- SEQFIFOSTORE(p, MSG, 0, 0, VLF);
- SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2);
-
- /* Write ICV */
- SEQSTORE(p, CONTEXT2, 0, ctx->authsize, 0);
-
- PATCH_JUMP(p, pskip_key_load, skip_key_load);
-
- PROGRAM_FINALIZE(p);
-
+ /* aead_encrypt shared descriptor */
+ desc = ctx->sh_desc_enc;
+ cnstr_shdsc_aead_encap(desc, ps, &cipherdata, &authdata, tfm->ivsize,
+ ctx->authsize);
desc_bytes = DESC_BYTES(desc);
ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
@@ -440,93 +243,20 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
* Job Descriptor and Shared Descriptors
* must all fit into the 64-word Descriptor h/w Buffer
*/
- keys_fit_inline = false;
- if (DESC_AEAD_DEC_LEN + DESC_JOB_IO_LEN +
- ctx->split_key_pad_len + ctx->enckeylen <=
- CAAM_DESC_BYTES_MAX)
- keys_fit_inline = true;
-
- /* aead_decrypt shared descriptor */
- desc = ctx->sh_desc_dec;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- /* aead_decrypt shared descriptor */
- SHR_HDR(p, SHR_SERIAL, 1, 0);
-
- /* Skip key loading if already shared */
- pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
-
- if (keys_fit_inline) {
- KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
- ctx->split_key_len, IMMED | COPY);
- KEY(p, KEY1, 0, (uintptr_t)(ctx->key + ctx->split_key_pad_len),
- ctx->enckeylen, IMMED | COPY);
+ if (rem_bytes >= DESC_AEAD_DEC_LEN) {
+ authdata.key = (uintptr_t)ctx->key;
+ authdata.key_type = RTA_DATA_IMM;
} else {
- KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
- 0);
- KEY(p, KEY1, 0, ctx->key_dma + ctx->split_key_pad_len,
- ctx->enckeylen, 0);
+ authdata.key = ctx->key_dma;
+ authdata.key_type = RTA_DATA_PTR;
}
+ cipherdata.key = authdata.key + ctx->split_key_pad_len;
+ cipherdata.key_type = authdata.key_type;

- SET_LABEL(p, skip_key_load);
-
- /* Class 2 operation */
- ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class2_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_ENABLE, DIR_DEC);
-
- /* assoclen + cryptlen = seqinlen - ivsize - authsize */
- MATHB(p, SEQINSZ, SUB, ctx->authsize + tfm->ivsize, MATH3, CAAM_CMD_SZ,
- IMMED2);
- /* assoclen = (assoclen + cryptlen) - cryptlen */
- MATHB(p, SEQOUTSZ, SUB, MATH0, MATH2, CAAM_CMD_SZ, 0);
- MATHB(p, MATH3, SUB, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
-
- /* read assoc before reading payload */
- SEQFIFOLOAD(p, MSG2, 0 , VLF);
-
- /* read iv for both classes */
- SEQLOAD(p, CONTEXT1, 0, tfm->ivsize, 0);
- MOVE(p, CONTEXT1, 0, IFIFOAB2, 0, tfm->ivsize, IMMED);
-
- /* Set DK bit in class 1 operation if shared (AES only) */
- if ((ctx->class1_alg_type & OP_ALG_ALGSEL_MASK) == OP_ALG_ALGSEL_AES) {
- pset_dk = JUMP(p, set_dk, LOCAL_JUMP, ALL_TRUE, SHRD);
- ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class1_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
- pskip_dk = JUMP(p, skip_dk, LOCAL_JUMP, ALL_TRUE, 0);
- SET_LABEL(p, set_dk);
- ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
- (ctx->class1_alg_type & OP_ALG_AAI_MASK) |
- OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
- ICV_CHECK_DISABLE, DIR_DEC);
- SET_LABEL(p, skip_dk);
- } else {
- ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class1_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
- }
-
- /* Read and write cryptlen bytes */
- MATHB(p, ZERO, ADD, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
- MATHB(p, ZERO, ADD, MATH2, VSEQOUTSZ, CAAM_CMD_SZ, 0);
-
- /* Read and write payload */
- SEQFIFOSTORE(p, MSG, 0, 0, VLF);
- SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2);
-
- /* Load ICV */
- SEQFIFOLOAD(p, ICV2, ctx->authsize, LAST2);
-
- PATCH_JUMP(p, pskip_key_load, skip_key_load);
- PATCH_JUMP(p, pset_dk, set_dk);
- PATCH_JUMP(p, pskip_dk, skip_dk);
-
- PROGRAM_FINALIZE(p);
-
+ /* aead_decrypt shared descriptor */
+ desc = ctx->sh_desc_dec;
+ cnstr_shdsc_aead_decap(desc, ps, &cipherdata, &authdata, tfm->ivsize,
+ ctx->authsize);
desc_bytes = DESC_BYTES(desc);
ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
@@ -543,95 +273,20 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
* Job Descriptor and Shared Descriptors
* must all fit into the 64-word Descriptor h/w Buffer
*/
- keys_fit_inline = false;
- if (DESC_AEAD_GIVENC_LEN + DESC_JOB_IO_LEN +
- ctx->split_key_pad_len + ctx->enckeylen <=
- CAAM_DESC_BYTES_MAX)
- keys_fit_inline = true;
-
- /* aead_givencrypt shared descriptor */
- desc = ctx->sh_desc_givenc;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- SHR_HDR(p, SHR_SERIAL, 1, 0);
-
- /* Skip key loading if already shared */
- pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
-
- if (keys_fit_inline) {
- KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
- ctx->split_key_len, IMMED | COPY);
- KEY(p, KEY1, 0,
- (uintptr_t)(ctx->key + ctx->split_key_pad_len),
- ctx->enckeylen, IMMED | COPY);
+ if (rem_bytes >= DESC_AEAD_GIVENC_LEN) {
+ authdata.key = (uintptr_t)ctx->key;
+ authdata.key_type = RTA_DATA_IMM;
} else {
- KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
- 0);
- KEY(p, KEY1, 0, ctx->key_dma + ctx->split_key_pad_len,
- ctx->enckeylen, 0);
+ authdata.key = ctx->key_dma;
+ authdata.key_type = RTA_DATA_PTR;
}
+ cipherdata.key = authdata.key + ctx->split_key_pad_len;
+ cipherdata.key_type = authdata.key_type;

- SET_LABEL(p, skip_key_load);
-
- /* Generate IV */
- geniv = NFIFOENTRY_STYPE_PAD | NFIFOENTRY_DEST_DECO |
- NFIFOENTRY_DTYPE_MSG | NFIFOENTRY_LC1 |
- NFIFOENTRY_PTYPE_RND | (tfm->ivsize << NFIFOENTRY_DLEN_SHIFT);
- LOAD(p, geniv, NFIFO, 0, CAAM_CMD_SZ, IMMED);
- LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
- MOVE(p, IFIFOABD, 0, CONTEXT1, 0, tfm->ivsize, IMMED);
- LOAD(p, 0, DCTRL, LDOFF_ENABLE_AUTO_NFIFO, 0, IMMED);
-
- /* Copy IV to class 1 context */
- MOVE(p, CONTEXT1, 0, OFIFO, 0, tfm->ivsize, IMMED);
-
- /* Return to encryption */
- ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class2_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
-
- /* ivsize + cryptlen = seqoutlen - authsize */
- MATHB(p, SEQOUTSZ, SUB, ctx->authsize, MATH3, CAAM_CMD_SZ, IMMED2);
-
- /* assoclen = seqinlen - (ivsize + cryptlen) */
- MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
-
- /* read assoc before reading payload */
- SEQFIFOLOAD(p, MSG2, 0, VLF);
-
- /* Copy iv from class 1 ctx to class 2 fifo*/
- moveiv = NFIFOENTRY_STYPE_OFIFO | NFIFOENTRY_DEST_CLASS2 |
- NFIFOENTRY_DTYPE_MSG | (tfm->ivsize << NFIFOENTRY_DLEN_SHIFT);
- LOAD(p, moveiv, NFIFO, 0, CAAM_CMD_SZ, IMMED);
- LOAD(p, tfm->ivsize, DATA2SZ, 0, CAAM_CMD_SZ, IMMED);
-
- /* Class 1 operation */
- ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class1_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
-
- /* Will write ivsize + cryptlen */
- MATHB(p, SEQINSZ, ADD, MATH0, VSEQOUTSZ, CAAM_CMD_SZ, 0);
-
- /* Not need to reload iv */
- SEQFIFOLOAD(p, SKIP, tfm->ivsize, 0);
-
- /* Will read cryptlen */
- MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, CAAM_CMD_SZ, 0);
-
- /* Read and write payload */
- SEQFIFOSTORE(p, MSG, 0, 0, VLF);
- SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2);
-
- /* Write ICV */
- SEQSTORE(p, CONTEXT2, 0, ctx->authsize, 0);
-
- PATCH_JUMP(p, pskip_key_load, skip_key_load);
-
- PROGRAM_FINALIZE(p);
-
+ /* aead_givencrypt shared descriptor */
+ desc = ctx->sh_desc_givenc;
+ cnstr_shdsc_aead_givencap(desc, ps, &cipherdata, &authdata, tfm->ivsize,
+ ctx->authsize);
desc_bytes = DESC_BYTES(desc);
ctx->sh_desc_givenc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
@@ -737,15 +392,8 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
struct device *jrdev = ctx->jrdev;
int ret = 0;
u32 *desc;
- struct program prg;
- struct program *p = &prg;
unsigned desc_bytes;
- LABEL(skip_key_load);
- REFERENCE(pskip_key_load);
- LABEL(set_dk);
- REFERENCE(pset_dk);
- LABEL(skip_dk);
- REFERENCE(pskip_dk);
+ struct alginfo cipherdata;

#ifdef DEBUG
print_hex_dump(KERN_ERR, "key in @"__stringify(__LINE__)": ",
@@ -753,48 +401,18 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
#endif

memcpy(ctx->key, key, keylen);
- ctx->key_dma = dma_map_single(jrdev, ctx->key, keylen,
- DMA_TO_DEVICE);
- if (dma_mapping_error(jrdev, ctx->key_dma)) {
- dev_err(jrdev, "unable to map key i/o memory\n");
- return -ENOMEM;
- }
ctx->enckeylen = keylen;

+ cipherdata.algtype = ctx->class1_alg_type & OP_ALG_ALGSEL_MASK;
+ cipherdata.key_enc_flags = 0;
+ cipherdata.keylen = ctx->enckeylen;
+ cipherdata.key = (uintptr_t)ctx->key;
+ cipherdata.key_type = RTA_DATA_IMM;
+
/* ablkcipher_encrypt shared descriptor */
desc = ctx->sh_desc_enc;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- SHR_HDR(p, SHR_SERIAL, 1, 0);
-
- /* Skip key loading if already shared */
- pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
-
- /* Load class1 key only */
- KEY(p, KEY1, 0, (uintptr_t)ctx->key, ctx->enckeylen, IMMED | COPY);
-
- SET_LABEL(p, skip_key_load);
-
- /* Load IV */
- SEQLOAD(p, CONTEXT1, 0, tfm->ivsize, 0);
-
- /* Load operation */
- ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class1_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
-
- /* Perform operation */
- MATHB(p, SEQINSZ, ADD, MATH0, VSEQOUTSZ, 4, 0);
- MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, 4, 0);
- SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
- SEQFIFOSTORE(p, MSG, 0, 0, VLF);
-
- PATCH_JUMP(p, pskip_key_load, skip_key_load);
-
- PROGRAM_FINALIZE(p);
-
+ cnstr_shdsc_cbc_blkcipher(desc, ps, &cipherdata, NULL, tfm->ivsize,
+ DIR_ENC);
desc_bytes = DESC_BYTES(desc);
ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
@@ -810,54 +428,8 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,

/* ablkcipher_decrypt shared descriptor */
desc = ctx->sh_desc_dec;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- SHR_HDR(p, SHR_SERIAL, 1, 0);
-
- /* Skip key loading if already shared */
- pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
-
- /* Load class1 key only */
- KEY(p, KEY1, 0, (uintptr_t)ctx->key, ctx->enckeylen, IMMED | COPY);
-
- SET_LABEL(p, skip_key_load);
-
- /* load IV */
- SEQLOAD(p, CONTEXT1, 0, tfm->ivsize, 0);
-
- /* Set DK bit in class 1 operation if shared (AES only) */
- if ((ctx->class1_alg_type & OP_ALG_ALGSEL_MASK) == OP_ALG_ALGSEL_AES) {
- pset_dk = JUMP(p, set_dk, LOCAL_JUMP, ALL_TRUE, SHRD);
- ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class1_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
- pskip_dk = JUMP(p, skip_dk, LOCAL_JUMP, ALL_TRUE, 0);
- SET_LABEL(p, set_dk);
- ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
- (ctx->class1_alg_type & OP_ALG_AAI_MASK) |
- OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
- ICV_CHECK_DISABLE, DIR_DEC);
- SET_LABEL(p, skip_dk);
- } else {
- ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
- ctx->class1_alg_type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
- }
-
- /* Perform operation */
- MATHB(p, SEQINSZ, ADD, MATH0, VSEQOUTSZ, 4, 0);
- MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, 4, 0);
- SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
- SEQFIFOSTORE(p, MSG, 0, 0, VLF);
-
- PATCH_JUMP(p, pskip_key_load, skip_key_load);
- PATCH_JUMP(p, pset_dk, set_dk);
- PATCH_JUMP(p, pskip_dk, skip_dk);
-
- PROGRAM_FINALIZE(p);
-
+ cnstr_shdsc_cbc_blkcipher(desc, ps, &cipherdata, NULL, tfm->ivsize,
+ DIR_DEC);
desc_bytes = DESC_BYTES(desc);
ctx->sh_desc_dec_dma = dma_map_single(jrdev, desc, desc_bytes,
DMA_TO_DEVICE);
diff --git a/drivers/crypto/caam/flib/desc/algo.h b/drivers/crypto/caam/flib/desc/algo.h
new file mode 100644
index 000000000000..652d7f55f5e6
--- /dev/null
+++ b/drivers/crypto/caam/flib/desc/algo.h
@@ -0,0 +1,88 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __DESC_ALGO_H__
+#define __DESC_ALGO_H__
+
+#include "flib/rta.h"
+#include "common.h"
+
+/**
+ * DOC: Algorithms - Shared Descriptor Constructors
+ *
+ * Shared descriptors for algorithms (i.e. not for protocols).
+ */
+
+/**
+ * cnstr_shdsc_cbc_blkcipher - CBC block cipher
+ * @descbuf: pointer to descriptor-under-construction buffer
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @cipherdata: pointer to block cipher transform definitions
+ * @iv: IV data; if NULL, "ivlen" bytes from the input frame will be read as IV
+ * @ivlen: IV length
+ * @dir: DIR_ENC/DIR_DEC
+ *
+ * Return: size of descriptor written in words
+ */
+static inline int cnstr_shdsc_cbc_blkcipher(uint32_t *descbuf, bool ps,
+ struct alginfo *cipherdata, uint8_t *iv,
+ uint32_t ivlen, uint8_t dir)
+{
+ struct program prg;
+ struct program *p = &prg;
+ const bool is_aes_dec = (dir == DIR_DEC) &&
+ (cipherdata->algtype == OP_ALG_ALGSEL_AES);
+ LABEL(keyjmp);
+ LABEL(skipdk);
+ REFERENCE(pkeyjmp);
+ REFERENCE(pskipdk);
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ SHR_HDR(p, SHR_SERIAL, 1, SC);
+
+ pkeyjmp = JUMP(p, keyjmp, LOCAL_JUMP, ALL_TRUE, SHRD);
+ /* Insert Key */
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+
+ if (is_aes_dec) {
+ ALG_OPERATION(p, cipherdata->algtype, OP_ALG_AAI_CBC,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+
+ pskipdk = JUMP(p, skipdk, LOCAL_JUMP, ALL_TRUE, 0);
+ }
+ SET_LABEL(p, keyjmp);
+
+ if (is_aes_dec) {
+ ALG_OPERATION(p, OP_ALG_ALGSEL_AES, OP_ALG_AAI_CBC |
+ OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, dir);
+ SET_LABEL(p, skipdk);
+ } else {
+ ALG_OPERATION(p, cipherdata->algtype, OP_ALG_AAI_CBC,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, dir);
+ }
+
+ if (iv)
+ /* IV load, convert size */
+ LOAD(p, (uintptr_t)iv, CONTEXT1, 0, ivlen, IMMED | COPY);
+ else
+ /* IV precedes the message in the input frame */
+ SEQLOAD(p, CONTEXT1, 0, ivlen, 0);
+
+ MATHB(p, SEQINSZ, SUB, MATH2, VSEQINSZ, 4, 0);
+ MATHB(p, SEQINSZ, SUB, MATH2, VSEQOUTSZ, 4, 0);
+
+ /* Insert sequence load/store with VLF */
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+
+ PATCH_JUMP(p, pkeyjmp, keyjmp);
+ if (is_aes_dec)
+ PATCH_JUMP(p, pskipdk, skipdk);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_ALGO_H__ */
diff --git a/drivers/crypto/caam/flib/desc/ipsec.h b/drivers/crypto/caam/flib/desc/ipsec.h
new file mode 100644
index 000000000000..b5436133d26c
--- /dev/null
+++ b/drivers/crypto/caam/flib/desc/ipsec.h
@@ -0,0 +1,550 @@
+/* Copyright 2008-2013 Freescale Semiconductor, Inc. */
+
+#ifndef __DESC_IPSEC_H__
+#define __DESC_IPSEC_H__
+
+#include "flib/rta.h"
+#include "common.h"
+
+/**
+ * DOC: IPsec Shared Descriptor Constructors
+ *
+ * Shared descriptors for IPsec protocol.
+ */
+
+#define DESC_AEAD_BASE (4 * CAAM_CMD_SZ)
+
+/**
+ * DESC_AEAD_ENC_LEN - Length of descriptor built by cnstr_shdsc_aead_encap().
+ *
+ * Does not account for the key lengths. It is intended to be used by upper
+ * layers to determine whether keys can be inlined or not.
+ */
+#define DESC_AEAD_ENC_LEN (DESC_AEAD_BASE + 15 * CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_aead_encap - IPsec ESP encapsulation shared descriptor
+ * (non-protocol).
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @cipherdata: pointer to block cipher transform definitions
+ * Valid algorithm values - one of OP_ALG_ALGSEL_{AES, DES, 3DES}
+ * ANDed with OP_ALG_AAI_CBC.
+ * @authdata: pointer to authentication transform definitions. Note that since a
+ * split key is to be used, the size of the split key itself is
+ * specified. Valid algorithm values - one of OP_ALG_ALGSEL_{MD5,
+ * SHA1, SHA224, SHA256, SHA384, SHA512} ANDed with
+ * OP_ALG_AAI_HMAC_PRECOMP.
+ * @ivsize: initialization vector size
+ * @icvsize: integrity check value (ICV) size (truncated or full)
+ *
+ * Note: Requires an MDHA split key.
+ *
+ * Return: size of descriptor written in words
+ */
+static inline int cnstr_shdsc_aead_encap(uint32_t *descbuf, bool ps,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int ivsize,
+ unsigned int icvsize)
+{
+ struct program prg;
+ struct program *p = &prg;
+
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ SHR_HDR(p, SHR_SERIAL, 1, 0);
+
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
+ KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+ SET_LABEL(p, skip_key_load);
+
+ /* Class 2 operation */
+ ALG_OPERATION(p, authdata->algtype & OP_ALG_ALGSEL_MASK,
+ authdata->algtype & OP_ALG_AAI_MASK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, DIR_ENC);
+
+ /* cryptlen = seqoutlen - authsize */
+ MATHB(p, SEQOUTSZ, SUB, icvsize, MATH3, CAAM_CMD_SZ, IMMED2);
+
+ /* assoclen + cryptlen = seqinlen - ivsize */
+ MATHB(p, SEQINSZ, SUB, ivsize, MATH2, CAAM_CMD_SZ, IMMED2);
+
+ /* assoclen = (assoclen + cryptlen) - cryptlen */
+ MATHB(p, MATH2, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
+
+ /* read assoc before reading payload */
+ SEQFIFOLOAD(p, MSG2, 0, VLF);
+
+ /* read iv for both classes */
+ SEQLOAD(p, CONTEXT1, 0, ivsize, 0);
+ MOVE(p, CONTEXT1, 0, IFIFOAB2, 0, ivsize, IMMED);
+
+ /* Class 1 operation */
+ ALG_OPERATION(p, cipherdata->algtype & OP_ALG_ALGSEL_MASK,
+ cipherdata->algtype & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+
+ /* Read and write cryptlen bytes */
+ MATHB(p, ZERO, ADD, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(p, ZERO, ADD, MATH3, VSEQOUTSZ, CAAM_CMD_SZ, 0);
+
+ /* Read and write payload */
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2);
+
+ /* Write ICV */
+ SEQSTORE(p, CONTEXT2, 0, icvsize, 0);
+
+ PATCH_JUMP(p, pskip_key_load, skip_key_load);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * DESC_AEAD_GIVENC_LEN - Length of descriptor built by
+ * cnstr_shdsc_aead_givencap().
+ *
+ * Does not account for the key lengths. It is intended to be used by upper
+ * layers to determine whether keys can be inlined or not.
+ */
+#define DESC_AEAD_GIVENC_LEN (DESC_AEAD_ENC_LEN + 7 * CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_aead_givencap - IPsec ESP encapsulation shared descriptor
+ * (non-protocol) with HW-generated initialization
+ * vector.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @cipherdata: pointer to block cipher transform definitions
+ * Valid algorithm values - one of OP_ALG_ALGSEL_{AES, DES, 3DES}
+ * ANDed with OP_ALG_AAI_CBC.
+ * @authdata: pointer to authentication transform definitions. Note that since a
+ * split key is to be used, the size of the split key itself is
+ * specified. Valid algorithm values - one of OP_ALG_ALGSEL_{MD5,
+ * SHA1, SHA224, SHA256, SHA384, SHA512} ANDed with
+ * OP_ALG_AAI_HMAC_PRECOMP.
+ * @ivsize: initialization vector size
+ * @icvsize: integrity check value (ICV) size (truncated or full)
+ *
+ * Note: Requires an MDHA split key.
+ *
+ * Return: size of descriptor written in words
+ */
+static inline int cnstr_shdsc_aead_givencap(uint32_t *descbuf, bool ps,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int ivsize,
+ unsigned int icvsize)
+{
+ struct program prg;
+ struct program *p = &prg;
+ uint32_t geniv, moveiv;
+
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ SHR_HDR(p, SHR_SERIAL, 1, 0);
+
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
+ KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+ SET_LABEL(p, skip_key_load);
+
+ /* Generate IV */
+ geniv = NFIFOENTRY_STYPE_PAD | NFIFOENTRY_DEST_DECO |
+ NFIFOENTRY_DTYPE_MSG | NFIFOENTRY_LC1 |
+ NFIFOENTRY_PTYPE_RND | (ivsize << NFIFOENTRY_DLEN_SHIFT);
+ LOAD(p, geniv, NFIFO, 0, CAAM_CMD_SZ, IMMED);
+ LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
+ MOVE(p, IFIFOABD, 0, CONTEXT1, 0, ivsize, IMMED);
+ LOAD(p, 0, DCTRL, LDOFF_ENABLE_AUTO_NFIFO, 0, IMMED);
+
+ /* Copy IV to class 1 context */
+ MOVE(p, CONTEXT1, 0, OFIFO, 0, ivsize, IMMED);
+
+ /* Return to encryption */
+ ALG_OPERATION(p, authdata->algtype & OP_ALG_ALGSEL_MASK,
+ authdata->algtype & OP_ALG_AAI_MASK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, DIR_ENC);
+
+ /* ivsize + cryptlen = seqoutlen - authsize */
+ MATHB(p, SEQOUTSZ, SUB, icvsize, MATH3, CAAM_CMD_SZ, IMMED2);
+
+ /* assoclen = seqinlen - (ivsize + cryptlen) */
+ MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
+
+ /* read assoc before reading payload */
+ SEQFIFOLOAD(p, MSG2, 0, VLF);
+
+ /* Copy IV from class 1 ctx to class 2 fifo */
+ moveiv = NFIFOENTRY_STYPE_OFIFO | NFIFOENTRY_DEST_CLASS2 |
+ NFIFOENTRY_DTYPE_MSG | (ivsize << NFIFOENTRY_DLEN_SHIFT);
+ LOAD(p, moveiv, NFIFO, 0, CAAM_CMD_SZ, IMMED);
+ LOAD(p, ivsize, DATA2SZ, 0, CAAM_CMD_SZ, IMMED);
+
+ /* Class 1 operation */
+ ALG_OPERATION(p, cipherdata->algtype & OP_ALG_ALGSEL_MASK,
+ cipherdata->algtype & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);
+
+ /* Will write ivsize + cryptlen */
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQOUTSZ, CAAM_CMD_SZ, 0);
+
+ /* No need to reload IV */
+ SEQFIFOLOAD(p, SKIP, ivsize, 0);
+
+ /* Will read cryptlen */
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, CAAM_CMD_SZ, 0);
+
+ /* Read and write payload */
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGOUTSNOOP, 0, VLF | LAST1 | LAST2);
+
+ /* Write ICV */
+ SEQSTORE(p, CONTEXT2, 0, icvsize, 0);
+
+ PATCH_JUMP(p, pskip_key_load, skip_key_load);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * DESC_AEAD_DEC_LEN - Length of descriptor built by cnstr_shdsc_aead_decap().
+ *
+ * Does not account for the key lengths. It is intended to be used by upper
+ * layers to determine whether keys can be inlined or not.
+ */
+#define DESC_AEAD_DEC_LEN (DESC_AEAD_BASE + 18 * CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_aead_decap - IPsec ESP decapsulation shared descriptor
+ * (non-protocol).
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @cipherdata: pointer to block cipher transform definitions
+ * Valid algorithm values - one of OP_ALG_ALGSEL_{AES, DES, 3DES}
+ * ANDed with OP_ALG_AAI_CBC.
+ * @authdata: pointer to authentication transform definitions. Note that since a
+ * split key is to be used, the size of the split key itself is
+ * specified. Valid algorithm values - one of OP_ALG_ALGSEL_{MD5,
+ * SHA1, SHA224, SHA256, SHA384, SHA512} ANDed with
+ * OP_ALG_AAI_HMAC_PRECOMP.
+ * @ivsize: initialization vector size
+ * @icvsize: integrity check value (ICV) size (truncated or full)
+ *
+ * Note: Requires an MDHA split key.
+ *
+ * Return: size of descriptor written in words
+ */
+static inline int cnstr_shdsc_aead_decap(uint32_t *descbuf, bool ps,
+ struct alginfo *cipherdata,
+ struct alginfo *authdata,
+ unsigned int ivsize,
+ unsigned int icvsize)
+{
+ struct program prg;
+ struct program *p = &prg;
+
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);
+ LABEL(set_dk);
+ REFERENCE(pset_dk);
+ LABEL(skip_dk);
+ REFERENCE(pskip_dk);
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ /* aead_decrypt shared descriptor */
+ SHR_HDR(p, SHR_SERIAL, 1, 0);
+
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
+ KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+ KEY(p, KEY1, cipherdata->key_enc_flags, cipherdata->key,
+ cipherdata->keylen, INLINE_KEY(cipherdata));
+ SET_LABEL(p, skip_key_load);
+
+ /* Class 2 operation */
+ ALG_OPERATION(p, authdata->algtype & OP_ALG_ALGSEL_MASK,
+ authdata->algtype & OP_ALG_AAI_MASK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_ENABLE, DIR_DEC);
+
+ /* assoclen + cryptlen = seqinlen - ivsize - authsize */
+ MATHB(p, SEQINSZ, SUB, icvsize + ivsize, MATH3, CAAM_CMD_SZ, IMMED2);
+ /* assoclen = (assoclen + cryptlen) - cryptlen */
+ MATHB(p, SEQOUTSZ, SUB, MATH0, MATH2, CAAM_CMD_SZ, 0);
+ MATHB(p, MATH3, SUB, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
+
+ /* read assoc before reading payload */
+ SEQFIFOLOAD(p, MSG2, 0, VLF);
+
+ /* read iv for both classes */
+ SEQLOAD(p, CONTEXT1, 0, ivsize, 0);
+ MOVE(p, CONTEXT1, 0, IFIFOAB2, 0, ivsize, IMMED);
+
+ /* Set DK bit in class 1 operation if shared (AES only) */
+ if ((cipherdata->algtype & OP_ALG_ALGSEL_MASK) == OP_ALG_ALGSEL_AES) {
+ pset_dk = JUMP(p, set_dk, LOCAL_JUMP, ALL_TRUE, SHRD);
+ ALG_OPERATION(p, cipherdata->algtype & OP_ALG_ALGSEL_MASK,
+ cipherdata->algtype & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ DIR_DEC);
+ pskip_dk = JUMP(p, skip_dk, LOCAL_JUMP, ALL_TRUE, 0);
+ SET_LABEL(p, set_dk);
+ ALG_OPERATION(p, cipherdata->algtype & OP_ALG_ALGSEL_MASK,
+ (cipherdata->algtype & OP_ALG_AAI_MASK) |
+ OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, DIR_DEC);
+ SET_LABEL(p, skip_dk);
+ } else {
+ ALG_OPERATION(p, cipherdata->algtype & OP_ALG_ALGSEL_MASK,
+ cipherdata->algtype & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE,
+ DIR_DEC);
+ }
+
+ /* Read and write cryptlen bytes */
+ MATHB(p, ZERO, ADD, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(p, ZERO, ADD, MATH2, VSEQOUTSZ, CAAM_CMD_SZ, 0);
+
+ /* Read and write payload */
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2);
+
+ /* Load ICV */
+ SEQFIFOLOAD(p, ICV2, icvsize, LAST2);
+
+ PATCH_JUMP(p, pskip_key_load, skip_key_load);
+ PATCH_JUMP(p, pset_dk, set_dk);
+ PATCH_JUMP(p, pskip_dk, skip_dk);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+#define DESC_AEAD_NULL_BASE (3 * CAAM_CMD_SZ)
+
+/**
+ * DESC_AEAD_NULL_ENC_LEN - Length of descriptor built by
+ * cnstr_shdsc_aead_null_encap().
+ *
+ * Does not account for the key lengths. It is intended to be used by upper
+ * layers to determine whether keys can be inlined or not.
+ */
+#define DESC_AEAD_NULL_ENC_LEN (DESC_AEAD_NULL_BASE + 14 * CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_aead_null_encap - IPsec ESP encapsulation shared descriptor
+ * (non-protocol) with no (null) encryption.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @authdata: pointer to authentication transform definitions. Note that since a
+ * split key is to be used, the size of the split key itself is
+ * specified. Valid algorithm values - one of OP_ALG_ALGSEL_{MD5,
+ * SHA1, SHA224, SHA256, SHA384, SHA512} ANDed with
+ * OP_ALG_AAI_HMAC_PRECOMP.
+ * @icvsize: integrity check value (ICV) size (truncated or full)
+ *
+ * Note: Requires an MDHA split key.
+ *
+ * Return: size of descriptor written in words
+ */
+static inline int cnstr_shdsc_aead_null_encap(uint32_t *descbuf, bool ps,
+ struct alginfo *authdata,
+ unsigned int icvsize)
+{
+ struct program prg;
+ struct program *p = &prg;
+
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);
+ LABEL(read_move_cmd);
+ REFERENCE(pread_move_cmd);
+ LABEL(write_move_cmd);
+ REFERENCE(pwrite_move_cmd);
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ SHR_HDR(p, SHR_SERIAL, 1, 0);
+
+ /* Skip if already shared */
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
+ KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+ SET_LABEL(p, skip_key_load);
+
+ /* cryptlen = seqoutlen - authsize */
+ MATHB(p, SEQOUTSZ, SUB, icvsize, MATH3, CAAM_CMD_SZ, IMMED2);
+
+ /*
+ * NULL encryption; IV is zero
+ * assoclen = (assoclen + cryptlen) - cryptlen
+ */
+ MATHB(p, SEQINSZ, SUB, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
+
+ /* read assoc before reading payload */
+ SEQFIFOLOAD(p, MSG2, 0, VLF);
+
+ /* Prepare to read and write cryptlen bytes */
+ MATHB(p, ZERO, ADD, MATH3, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(p, ZERO, ADD, MATH3, VSEQOUTSZ, CAAM_CMD_SZ, 0);
+
+ /*
+ * The MOVE_LEN opcode is not available in all SEC HW revisions,
+ * so the descriptor patches its own buffer at run time instead.
+ */
+ pread_move_cmd = MOVE(p, DESCBUF, 0, MATH3, 0, 6, IMMED);
+ pwrite_move_cmd = MOVE(p, MATH3, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
+
+ /* Class 2 operation */
+ ALG_OPERATION(p, authdata->algtype & OP_ALG_ALGSEL_MASK,
+ authdata->algtype & OP_ALG_AAI_MASK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, DIR_ENC);
+
+ /* Read and write cryptlen bytes */
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+ SET_LABEL(p, read_move_cmd);
+ SET_LABEL(p, write_move_cmd);
+ LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
+ MOVE(p, IFIFOAB1, 0, OFIFO, 0, 0, IMMED);
+
+ /* Write ICV */
+ SEQSTORE(p, CONTEXT2, 0, icvsize, 0);
+
+ PATCH_JUMP(p, pskip_key_load, skip_key_load);
+ PATCH_MOVE(p, pread_move_cmd, read_move_cmd);
+ PATCH_MOVE(p, pwrite_move_cmd, write_move_cmd);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+/**
+ * DESC_AEAD_NULL_DEC_LEN - Length of descriptor built by
+ * cnstr_shdsc_aead_null_decap().
+ *
+ * Does not account for the key lengths. It is intended to be used by upper
+ * layers to determine whether keys can be inlined or not.
+ */
+#define DESC_AEAD_NULL_DEC_LEN (DESC_AEAD_NULL_BASE + 17 * CAAM_CMD_SZ)
+
+/**
+ * cnstr_shdsc_aead_null_decap - IPSec ESP decapsulation shared descriptor
+ * (non-protocol) with no (null) decryption.
+ * @descbuf: pointer to buffer used for descriptor construction
+ * @ps: if 36/40-bit addressing is desired, this parameter must be true
+ * @authdata: pointer to authentication transform definitions. Note that since a
+ * split key is to be used, the size of the split key itself is
+ * specified. Valid algorithm values - one of OP_ALG_ALGSEL_{MD5,
+ * SHA1, SHA224, SHA256, SHA384, SHA512} ANDed with
+ * OP_ALG_AAI_HMAC_PRECOMP.
+ * @icvsize: integrity check value (ICV) size (truncated or full)
+ *
+ * Note: Requires an MDHA split key.
+ *
+ * Return: size of descriptor written in words
+ */
+static inline int cnstr_shdsc_aead_null_decap(uint32_t *descbuf, bool ps,
+ struct alginfo *authdata,
+ unsigned int icvsize)
+{
+ struct program prg;
+ struct program *p = &prg;
+
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);
+ LABEL(nop_cmd);
+ REFERENCE(pnop_cmd);
+ LABEL(read_move_cmd);
+ REFERENCE(pread_move_cmd);
+ LABEL(write_move_cmd);
+ REFERENCE(pwrite_move_cmd);
+
+ PROGRAM_CNTXT_INIT(p, descbuf, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+
+ SHR_HDR(p, SHR_SERIAL, 1, 0);
+
+ /* Skip if already shared */
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
+ KEY(p, MDHA_SPLIT_KEY, authdata->key_enc_flags, authdata->key,
+ authdata->keylen, INLINE_KEY(authdata));
+ SET_LABEL(p, skip_key_load);
+
+ /* Class 2 operation */
+ ALG_OPERATION(p, authdata->algtype & OP_ALG_ALGSEL_MASK,
+ authdata->algtype & OP_ALG_AAI_MASK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_ENABLE, DIR_DEC);
+
+ /* assoclen + cryptlen = seqinlen - authsize */
+ MATHB(p, SEQINSZ, SUB, icvsize, MATH3, CAAM_CMD_SZ, IMMED2);
+ /* assoclen = (assoclen + cryptlen) - cryptlen */
+ MATHB(p, SEQOUTSZ, SUB, MATH0, MATH2, CAAM_CMD_SZ, 0);
+ MATHB(p, MATH3, SUB, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
+
+ /* read assoc before reading payload */
+ SEQFIFOLOAD(p, MSG2, 0, VLF);
+
+ /* Prepare to read and write cryptlen bytes */
+ MATHB(p, ZERO, ADD, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
+ MATHB(p, ZERO, ADD, MATH2, VSEQOUTSZ, CAAM_CMD_SZ, 0);
+
+ /*
+ * The MOVE_LEN opcode is not available in all SEC HW revisions,
+ * so the descriptor patches its own buffer at run time instead.
+ */
+ pread_move_cmd = MOVE(p, DESCBUF, 0, MATH2, 0, 6, IMMED);
+ pwrite_move_cmd = MOVE(p, MATH2, 0, DESCBUF, 0, 8, WAITCOMP | IMMED);
+
+ /* Read and write cryptlen bytes */
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);
+ SEQFIFOLOAD(p, MSGINSNOOP, 0, VLF | LAST1 | LAST2 | FLUSH1);
+
+ /*
+ * Insert a NOP here, since we need at least 4 instructions between
+ * code patching the descriptor buffer and the location being patched.
+ */
+ pnop_cmd = JUMP(p, nop_cmd, LOCAL_JUMP, ALL_TRUE, 0);
+ SET_LABEL(p, nop_cmd);
+
+ SET_LABEL(p, read_move_cmd);
+ SET_LABEL(p, write_move_cmd);
+ LOAD(p, 0, DCTRL, LDOFF_DISABLE_AUTO_NFIFO, 0, IMMED);
+ MOVE(p, IFIFOAB1, 0, OFIFO, 0, 0, IMMED);
+ LOAD(p, 0, DCTRL, LDOFF_ENABLE_AUTO_NFIFO, 0, IMMED);
+
+ /* Load ICV */
+ SEQFIFOLOAD(p, ICV2, icvsize, LAST2);
+
+ PATCH_JUMP(p, pskip_key_load, skip_key_load);
+ PATCH_JUMP(p, pnop_cmd, nop_cmd);
+ PATCH_MOVE(p, pread_move_cmd, read_move_cmd);
+ PATCH_MOVE(p, pwrite_move_cmd, write_move_cmd);
+
+ return PROGRAM_FINALIZE(p);
+}
+
+#endif /* __DESC_IPSEC_H__ */
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:24 UTC
1. fix HDR_START_IDX_MASK, HDR_SD_SHARE_MASK, HDR_JD_SHARE_MASK
Define HDR_START_IDX_MASK consistently with the other masks:
mask = bitmask << offset

2. OP_ALG_TYPE_CLASS1 and OP_ALG_TYPE_CLASS2 must be shifted.

3. fix FIFO_STORE output data type value for AFHA S-Box

4. fix OPERATION pkha modular arithmetic source mask

5. rename LDST_SRCDST_WORD_CLASS1_ICV_SZ to
LDST_SRCDST_WORD_CLASS1_IV_SZ (it refers to IV, not ICV).

Signed-off-by: Horia Geanta <***@freescale.com>
---
drivers/crypto/caam/desc.h | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/caam/desc.h b/drivers/crypto/caam/desc.h
index d397ff9d56fd..f891a67c4786 100644
--- a/drivers/crypto/caam/desc.h
+++ b/drivers/crypto/caam/desc.h
@@ -80,8 +80,8 @@ struct sec4_sg_entry {
#define HDR_ZRO 0x00008000

/* Start Index or SharedDesc Length */
-#define HDR_START_IDX_MASK 0x3f
#define HDR_START_IDX_SHIFT 16
+#define HDR_START_IDX_MASK (0x3f << HDR_START_IDX_SHIFT)

/* If shared descriptor header, 6-bit length */
#define HDR_DESCLEN_SHR_MASK 0x3f
@@ -111,10 +111,10 @@ struct sec4_sg_entry {
#define HDR_PROP_DNR 0x00000800

/* JobDesc/SharedDesc share property */
-#define HDR_SD_SHARE_MASK 0x03
#define HDR_SD_SHARE_SHIFT 8
-#define HDR_JD_SHARE_MASK 0x07
+#define HDR_SD_SHARE_MASK (0x03 << HDR_SD_SHARE_SHIFT)
#define HDR_JD_SHARE_SHIFT 8
+#define HDR_JD_SHARE_MASK (0x07 << HDR_JD_SHARE_SHIFT)

#define HDR_SHARE_NEVER (0x00 << HDR_SD_SHARE_SHIFT)
#define HDR_SHARE_WAIT (0x01 << HDR_SD_SHARE_SHIFT)
@@ -225,7 +225,7 @@ struct sec4_sg_entry {
#define LDST_SRCDST_WORD_DECO_MATH2 (0x0a << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_DECO_AAD_SZ (0x0b << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_DECO_MATH3 (0x0b << LDST_SRCDST_SHIFT)
-#define LDST_SRCDST_WORD_CLASS1_ICV_SZ (0x0c << LDST_SRCDST_SHIFT)
+#define LDST_SRCDST_WORD_CLASS1_IV_SZ (0x0c << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_ALTDS_CLASS1 (0x0f << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_PKHA_A_SZ (0x10 << LDST_SRCDST_SHIFT)
#define LDST_SRCDST_WORD_PKHA_B_SZ (0x11 << LDST_SRCDST_SHIFT)
@@ -390,7 +390,7 @@ struct sec4_sg_entry {
#define FIFOST_TYPE_PKHA_N (0x08 << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_PKHA_A (0x0c << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_PKHA_B (0x0d << FIFOST_TYPE_SHIFT)
-#define FIFOST_TYPE_AF_SBOX_JKEK (0x10 << FIFOST_TYPE_SHIFT)
+#define FIFOST_TYPE_AF_SBOX_JKEK (0x20 << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_AF_SBOX_TKEK (0x21 << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_PKHA_E_JKEK (0x22 << FIFOST_TYPE_SHIFT)
#define FIFOST_TYPE_PKHA_E_TKEK (0x23 << FIFOST_TYPE_SHIFT)
@@ -1095,8 +1095,8 @@ struct sec4_sg_entry {
/* For non-protocol/alg-only op commands */
#define OP_ALG_TYPE_SHIFT 24
#define OP_ALG_TYPE_MASK (0x7 << OP_ALG_TYPE_SHIFT)
-#define OP_ALG_TYPE_CLASS1 2
-#define OP_ALG_TYPE_CLASS2 4
+#define OP_ALG_TYPE_CLASS1 (2 << OP_ALG_TYPE_SHIFT)
+#define OP_ALG_TYPE_CLASS2 (4 << OP_ALG_TYPE_SHIFT)

#define OP_ALG_ALGSEL_SHIFT 16
#define OP_ALG_ALGSEL_MASK (0xff << OP_ALG_ALGSEL_SHIFT)
@@ -1237,7 +1237,7 @@ struct sec4_sg_entry {
#define OP_ALG_PKMODE_MOD_PRIMALITY 0x00f

/* PKHA mode copy-memory functions */
-#define OP_ALG_PKMODE_SRC_REG_SHIFT 13
+#define OP_ALG_PKMODE_SRC_REG_SHIFT 17
#define OP_ALG_PKMODE_SRC_REG_MASK (7 << OP_ALG_PKMODE_SRC_REG_SHIFT)
#define OP_ALG_PKMODE_DST_REG_SHIFT 10
#define OP_ALG_PKMODE_DST_REG_MASK (7 << OP_ALG_PKMODE_DST_REG_SHIFT)
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:32 UTC
Refactor descriptor creation in caamalg and caamhash, i.e.
build each descriptor entirely in a single place / function.
This makes the code more comprehensible and easier to maintain.

Signed-off-by: Horia Geanta <***@freescale.com>
---
drivers/crypto/caam/caamalg.c | 244 +++++++++++++++------------
drivers/crypto/caam/caamhash.c | 368 ++++++++++++++---------------------------
2 files changed, 262 insertions(+), 350 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index cd1ba573c633..9090fc8c04e0 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -94,57 +94,6 @@
static struct list_head alg_list;
static const bool ps = (sizeof(dma_addr_t) == sizeof(u64));

-/* Set DK bit in class 1 operation if shared */
-static inline void append_dec_op1(struct program *p, u32 type)
-{
- LABEL(jump_cmd);
- REFERENCE(pjump_cmd);
- LABEL(uncond_jump_cmd);
- REFERENCE(puncond_jump_cmd);
-
- /* DK bit is valid only for AES */
- if ((type & OP_ALG_ALGSEL_MASK) != OP_ALG_ALGSEL_AES) {
- ALG_OPERATION(p, type & OP_ALG_ALGSEL_MASK,
- type & OP_ALG_AAI_MASK, OP_ALG_AS_INITFINAL,
- ICV_CHECK_DISABLE, DIR_DEC);
- return;
- }
-
- pjump_cmd = JUMP(p, jump_cmd, LOCAL_JUMP, ALL_TRUE, SHRD);
- ALG_OPERATION(p, type & OP_ALG_ALGSEL_MASK, type & OP_ALG_AAI_MASK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
- puncond_jump_cmd = JUMP(p, uncond_jump_cmd, LOCAL_JUMP, ALL_TRUE, 0);
- SET_LABEL(p, jump_cmd);
- ALG_OPERATION(p, type & OP_ALG_ALGSEL_MASK,
- (type & OP_ALG_AAI_MASK) | OP_ALG_AAI_DK,
- OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
- SET_LABEL(p, uncond_jump_cmd);
-
- PATCH_JUMP(p, pjump_cmd, jump_cmd);
- PATCH_JUMP(p, puncond_jump_cmd, uncond_jump_cmd);
-}
-
-/*
- * For aead encrypt and decrypt, read iv for both classes
- */
-static inline void aead_append_ld_iv(struct program *p, u32 ivsize)
-{
- SEQLOAD(p, CONTEXT1, 0, ivsize, 0);
- MOVE(p, CONTEXT1, 0, IFIFOAB2, 0, ivsize, IMMED);
-}
-
-/*
- * For ablkcipher encrypt and decrypt, read from req->src and
- * write to req->dst
- */
-static inline void ablkcipher_append_src_dst(struct program *p)
-{
- MATHB(p, SEQINSZ, ADD, MATH0, VSEQOUTSZ, 4, 0);
- MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, 4, 0);
- SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
- SEQFIFOSTORE(p, MSG, 0, 0, VLF);
-}
-
/*
* If all data, including src (with assoc and iv) or dst (with iv only) are
* contiguous
@@ -173,39 +122,6 @@ struct caam_ctx {
unsigned int authsize;
};

-static void append_key_aead(struct program *p, struct caam_ctx *ctx,
- int keys_fit_inline)
-{
- if (keys_fit_inline) {
- KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
- ctx->split_key_len, IMMED | COPY);
- KEY(p, KEY1, 0, (uintptr_t)(ctx->key + ctx->split_key_pad_len),
- ctx->enckeylen, IMMED | COPY);
- } else {
- KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
- 0);
- KEY(p, KEY1, 0, ctx->key_dma + ctx->split_key_pad_len,
- ctx->enckeylen, 0);
- }
-}
-
-static void init_sh_desc_key_aead(struct program *p, struct caam_ctx *ctx,
- int keys_fit_inline)
-{
- LABEL(key_jump_cmd);
- REFERENCE(pkey_jump_cmd);
-
- SHR_HDR(p, SHR_SERIAL, 1, 0);
-
- /* Skip if already shared */
- pkey_jump_cmd = JUMP(p, key_jump_cmd, LOCAL_JUMP, ALL_TRUE, SHRD);
-
- append_key_aead(p, ctx, keys_fit_inline);
-
- SET_LABEL(p, key_jump_cmd);
- PATCH_JUMP(p, pkey_jump_cmd, key_jump_cmd);
-}
-
static int aead_null_set_sh_desc(struct crypto_aead *aead)
{
struct aead_tfm *tfm = &aead->base.crt_aead;
@@ -419,6 +335,12 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
struct program prg;
struct program *p = &prg;
unsigned desc_bytes;
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);
+ LABEL(set_dk);
+ REFERENCE(pset_dk);
+ LABEL(skip_dk);
+ REFERENCE(pskip_dk);

if (!ctx->authsize)
return 0;
@@ -442,7 +364,24 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
if (ps)
PROGRAM_SET_36BIT_ADDR(p);

- init_sh_desc_key_aead(p, ctx, keys_fit_inline);
+ SHR_HDR(p, SHR_SERIAL, 1, 0);
+
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+ if (keys_fit_inline) {
+ KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
+ ctx->split_key_len, IMMED | COPY);
+ KEY(p, KEY1, 0, (uintptr_t)(ctx->key + ctx->split_key_pad_len),
+ ctx->enckeylen, IMMED | COPY);
+ } else {
+ KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
+ 0);
+ KEY(p, KEY1, 0, ctx->key_dma + ctx->split_key_pad_len,
+ ctx->enckeylen, 0);
+ }
+
+ SET_LABEL(p, skip_key_load);

/* Class 2 operation */
ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
@@ -460,7 +399,10 @@ static int aead_set_sh_desc(struct crypto_aead *aead)

/* read assoc before reading payload */
SEQFIFOLOAD(p, MSG2, 0 , VLF);
- aead_append_ld_iv(p, tfm->ivsize);
+
+ /* read iv for both classes */
+ SEQLOAD(p, CONTEXT1, 0, tfm->ivsize, 0);
+ MOVE(p, CONTEXT1, 0, IFIFOAB2, 0, tfm->ivsize, IMMED);

/* Class 1 operation */
ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
@@ -478,6 +420,8 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
/* Write ICV */
SEQSTORE(p, CONTEXT2, 0, ctx->authsize, 0);

+ PATCH_JUMP(p, pskip_key_load, skip_key_load);
+
PROGRAM_FINALIZE(p);

desc_bytes = DESC_BYTES(desc);
@@ -508,7 +452,25 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
if (ps)
PROGRAM_SET_36BIT_ADDR(p);

- init_sh_desc_key_aead(p, ctx, keys_fit_inline);
+ /* aead_decrypt shared descriptor */
+ SHR_HDR(p, SHR_SERIAL, 1, 0);
+
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+ if (keys_fit_inline) {
+ KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
+ ctx->split_key_len, IMMED | COPY);
+ KEY(p, KEY1, 0, (uintptr_t)(ctx->key + ctx->split_key_pad_len),
+ ctx->enckeylen, IMMED | COPY);
+ } else {
+ KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
+ 0);
+ KEY(p, KEY1, 0, ctx->key_dma + ctx->split_key_pad_len,
+ ctx->enckeylen, 0);
+ }
+
+ SET_LABEL(p, skip_key_load);

/* Class 2 operation */
ALG_OPERATION(p, ctx->class2_alg_type & OP_ALG_ALGSEL_MASK,
@@ -525,9 +487,28 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
/* read assoc before reading payload */
SEQFIFOLOAD(p, MSG2, 0 , VLF);

- aead_append_ld_iv(p, tfm->ivsize);
-
- append_dec_op1(p, ctx->class1_alg_type);
+ /* read iv for both classes */
+ SEQLOAD(p, CONTEXT1, 0, tfm->ivsize, 0);
+ MOVE(p, CONTEXT1, 0, IFIFOAB2, 0, tfm->ivsize, IMMED);
+
+ /* Set DK bit in class 1 operation if shared (AES only) */
+ if ((ctx->class1_alg_type & OP_ALG_ALGSEL_MASK) == OP_ALG_ALGSEL_AES) {
+ pset_dk = JUMP(p, set_dk, LOCAL_JUMP, ALL_TRUE, SHRD);
+ ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
+ pskip_dk = JUMP(p, skip_dk, LOCAL_JUMP, ALL_TRUE, 0);
+ SET_LABEL(p, set_dk);
+ ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ (ctx->class1_alg_type & OP_ALG_AAI_MASK) |
+ OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, DIR_DEC);
+ SET_LABEL(p, skip_dk);
+ } else {
+ ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
+ }

/* Read and write cryptlen bytes */
MATHB(p, ZERO, ADD, MATH2, VSEQINSZ, CAAM_CMD_SZ, 0);
@@ -540,6 +521,10 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
/* Load ICV */
SEQFIFOLOAD(p, ICV2, ctx->authsize, LAST2);

+ PATCH_JUMP(p, pskip_key_load, skip_key_load);
+ PATCH_JUMP(p, pset_dk, set_dk);
+ PATCH_JUMP(p, pskip_dk, skip_dk);
+
PROGRAM_FINALIZE(p);

desc_bytes = DESC_BYTES(desc);
@@ -570,7 +555,25 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
if (ps)
PROGRAM_SET_36BIT_ADDR(p);

- init_sh_desc_key_aead(p, ctx, keys_fit_inline);
+ SHR_HDR(p, SHR_SERIAL, 1, 0);
+
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+ if (keys_fit_inline) {
+ KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
+ ctx->split_key_len, IMMED | COPY);
+ KEY(p, KEY1, 0,
+ (uintptr_t)(ctx->key + ctx->split_key_pad_len),
+ ctx->enckeylen, IMMED | COPY);
+ } else {
+ KEY(p, MDHA_SPLIT_KEY, ENC, ctx->key_dma, ctx->split_key_len,
+ 0);
+ KEY(p, KEY1, 0, ctx->key_dma + ctx->split_key_pad_len,
+ ctx->enckeylen, 0);
+ }
+
+ SET_LABEL(p, skip_key_load);

/* Generate IV */
geniv = NFIFOENTRY_STYPE_PAD | NFIFOENTRY_DEST_DECO |
@@ -625,6 +628,8 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
/* Write ICV */
SEQSTORE(p, CONTEXT2, 0, ctx->authsize, 0);

+ PATCH_JUMP(p, pskip_key_load, skip_key_load);
+
PROGRAM_FINALIZE(p);

desc_bytes = DESC_BYTES(desc);
@@ -735,8 +740,12 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
struct program prg;
struct program *p = &prg;
unsigned desc_bytes;
- LABEL(key_jump_cmd);
- REFERENCE(pkey_jump_cmd);
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);
+ LABEL(set_dk);
+ REFERENCE(pset_dk);
+ LABEL(skip_dk);
+ REFERENCE(pskip_dk);

#ifdef DEBUG
print_hex_dump(KERN_ERR, "key in @"__stringify(__LINE__)": ",
@@ -759,13 +768,14 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
PROGRAM_SET_36BIT_ADDR(p);

SHR_HDR(p, SHR_SERIAL, 1, 0);
- /* Skip if already shared */
- pkey_jump_cmd = JUMP(p, key_jump_cmd, LOCAL_JUMP, ALL_TRUE, SHRD);
+
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);

/* Load class1 key only */
KEY(p, KEY1, 0, (uintptr_t)ctx->key, ctx->enckeylen, IMMED | COPY);

- SET_LABEL(p, key_jump_cmd);
+ SET_LABEL(p, skip_key_load);

/* Load IV */
SEQLOAD(p, CONTEXT1, 0, tfm->ivsize, 0);
@@ -776,9 +786,12 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_ENC);

/* Perform operation */
- ablkcipher_append_src_dst(p);
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQOUTSZ, 4, 0);
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, 4, 0);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);

- PATCH_JUMP(p, pkey_jump_cmd, key_jump_cmd);
+ PATCH_JUMP(p, pskip_key_load, skip_key_load);

PROGRAM_FINALIZE(p);

@@ -803,24 +816,45 @@ static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,

SHR_HDR(p, SHR_SERIAL, 1, 0);

- /* Skip if already shared */
- pkey_jump_cmd = JUMP(p, key_jump_cmd, LOCAL_JUMP, ALL_TRUE, SHRD);
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE, SHRD);

/* Load class1 key only */
KEY(p, KEY1, 0, (uintptr_t)ctx->key, ctx->enckeylen, IMMED | COPY);

- SET_LABEL(p, key_jump_cmd);
+ SET_LABEL(p, skip_key_load);

/* load IV */
SEQLOAD(p, CONTEXT1, 0, tfm->ivsize, 0);

- /* Choose operation */
- append_dec_op1(p, ctx->class1_alg_type);
+ /* Set DK bit in class 1 operation if shared (AES only) */
+ if ((ctx->class1_alg_type & OP_ALG_ALGSEL_MASK) == OP_ALG_ALGSEL_AES) {
+ pset_dk = JUMP(p, set_dk, LOCAL_JUMP, ALL_TRUE, SHRD);
+ ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
+ pskip_dk = JUMP(p, skip_dk, LOCAL_JUMP, ALL_TRUE, 0);
+ SET_LABEL(p, set_dk);
+ ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ (ctx->class1_alg_type & OP_ALG_AAI_MASK) |
+ OP_ALG_AAI_DK, OP_ALG_AS_INITFINAL,
+ ICV_CHECK_DISABLE, DIR_DEC);
+ SET_LABEL(p, skip_dk);
+ } else {
+ ALG_OPERATION(p, ctx->class1_alg_type & OP_ALG_ALGSEL_MASK,
+ ctx->class1_alg_type & OP_ALG_AAI_MASK,
+ OP_ALG_AS_INITFINAL, ICV_CHECK_DISABLE, DIR_DEC);
+ }

/* Perform operation */
- ablkcipher_append_src_dst(p);
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQOUTSZ, 4, 0);
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, 4, 0);
+ SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1);
+ SEQFIFOSTORE(p, MSG, 0, 0, VLF);

- PATCH_JUMP(p, pkey_jump_cmd, key_jump_cmd);
+ PATCH_JUMP(p, pskip_key_load, skip_key_load);
+ PATCH_JUMP(p, pset_dk, set_dk);
+ PATCH_JUMP(p, pskip_dk, skip_dk);

PROGRAM_FINALIZE(p);

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 529e3ca92406..0e5d7ef33ff8 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -137,36 +137,6 @@ struct caam_hash_state {

/* Common job descriptor seq in/out ptr routines */

-/* Map state->caam_ctx, and append seq_out_ptr command that points to it */
-static inline int map_seq_out_ptr_ctx(struct program *p, struct device *jrdev,
- struct caam_hash_state *state,
- int ctx_len)
-{
- state->ctx_dma = dma_map_single(jrdev, state->caam_ctx,
- ctx_len, DMA_FROM_DEVICE);
- if (dma_mapping_error(jrdev, state->ctx_dma)) {
- dev_err(jrdev, "unable to map ctx\n");
- return -ENOMEM;
- }
-
- SEQOUTPTR(p, state->ctx_dma, ctx_len, EXT);
-
- return 0;
-}
-
-/* Map req->result, and append seq_out_ptr command that points to it */
-static inline dma_addr_t map_seq_out_ptr_result(struct program *p,
- struct device *jrdev,
- u8 *result, int digestsize)
-{
- dma_addr_t dst_dma;
-
- dst_dma = dma_map_single(jrdev, result, digestsize, DMA_FROM_DEVICE);
- SEQOUTPTR(p, dst_dma, digestsize, EXT);
-
- return dst_dma;
-}
-
/* Map current buffer in state and put it in link table */
static inline dma_addr_t buf_map_to_sec4_sg(struct device *jrdev,
struct sec4_sg_entry *sec4_sg,
@@ -225,64 +195,46 @@ static inline int ctx_map_to_sec4_sg(u32 *desc, struct device *jrdev,
return 0;
}

-/* Common shared descriptor commands */
-static inline void append_key_ahash(struct program *p,
- struct caam_hash_ctx *ctx)
+/*
+ * For ahash update, final and finup (import_ctx = true)
+ * import context, read and write to seqout
+ * For ahash firsts and digest (import_ctx = false)
+ * read and write to seqout
+ */
+static inline void ahash_gen_sh_desc(u32 *desc, u32 state, int digestsize,
+ struct caam_hash_ctx *ctx, bool import_ctx)
{
- KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
- ctx->split_key_len, IMMED | COPY);
-}
+ u32 op = ctx->alg_type;
+ struct program prg;
+ struct program *p = &prg;
+ LABEL(skip_key_load);
+ REFERENCE(pskip_key_load);

-/* Append key if it has been set */
-static inline void init_sh_desc_key_ahash(struct program *p,
- struct caam_hash_ctx *ctx)
-{
- LABEL(key_jump_cmd);
- REFERENCE(pkey_jump_cmd);
+ PROGRAM_CNTXT_INIT(p, desc, 0);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);

SHR_HDR(p, SHR_SERIAL, 1, 0);

- if (ctx->split_key_len) {
- /* Skip if already shared */
- pkey_jump_cmd = JUMP(p, key_jump_cmd, LOCAL_JUMP, ALL_TRUE,
- SHRD);
+ /* Append key if it has been set; ahash update excluded */
+ if ((state != OP_ALG_AS_UPDATE) && (ctx->split_key_len)) {
+ /* Skip key loading if already shared */
+ pskip_key_load = JUMP(p, skip_key_load, LOCAL_JUMP, ALL_TRUE,
+ SHRD);

- append_key_ahash(p, ctx);
+ KEY(p, MDHA_SPLIT_KEY, ENC, (uintptr_t)ctx->key,
+ ctx->split_key_len, IMMED | COPY);

- SET_LABEL(p, key_jump_cmd);
+ SET_LABEL(p, skip_key_load);

- PATCH_JUMP(p, pkey_jump_cmd, key_jump_cmd);
- }
-}
+ PATCH_JUMP(p, pskip_key_load, skip_key_load);

-/*
- * For ahash read data from seqin following state->caam_ctx,
- * and write resulting class2 context to seqout, which may be state->caam_ctx
- * or req->result
- */
-static inline void ahash_append_load_str(struct program *p, int digestsize)
-{
- /* Calculate remaining bytes to read */
- MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, CAAM_CMD_SZ, 0);
-
- /* Read remaining bytes */
- SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
-
- /* Store class2 context bytes */
- SEQSTORE(p, CONTEXT2, 0, digestsize, 0);
-}
-
-/*
- * For ahash update, final and finup, import context, read and write to seqout
- */
-static inline void ahash_ctx_data_to_out(struct program *p, u32 op, u32 state,
- int digestsize,
- struct caam_hash_ctx *ctx)
-{
- init_sh_desc_key_ahash(p, ctx);
+ op |= OP_ALG_AAI_HMAC_PRECOMP;
+ }

- /* Import context from software */
- SEQLOAD(p, CONTEXT2, 0, ctx->ctx_len, 0);
+ /* If needed, import context from software */
+ if (import_ctx)
+ SEQLOAD(p, CONTEXT2, 0, ctx->ctx_len, 0);

/* Class 2 operation */
ALG_OPERATION(p, op & OP_ALG_ALGSEL_MASK, op & OP_ALG_AAI_MASK, state,
@@ -290,24 +242,15 @@ static inline void ahash_ctx_data_to_out(struct program *p, u32 op, u32 state,

/*
* Load from buf and/or src and write to req->result or state->context
+ * Calculate remaining bytes to read
*/
- ahash_append_load_str(p, digestsize);
-}
-
-/* For ahash firsts and digest, read and write to seqout */
-static inline void ahash_data_to_out(struct program *p, u32 op, u32 state,
- int digestsize, struct caam_hash_ctx *ctx)
-{
- init_sh_desc_key_ahash(p, ctx);
-
- /* Class 2 operation */
- ALG_OPERATION(p, op & OP_ALG_ALGSEL_MASK, op & OP_ALG_AAI_MASK, state,
- ICV_CHECK_DISABLE, DIR_ENC);
+ MATHB(p, SEQINSZ, ADD, MATH0, VSEQINSZ, CAAM_CMD_SZ, 0);
+ /* Read remaining bytes */
+ SEQFIFOLOAD(p, MSG2, 0, VLF | LAST2);
+ /* Store class2 context bytes */
+ SEQSTORE(p, CONTEXT2, 0, digestsize, 0);

- /*
- * Load from buf and/or src and write to req->result or state->context
- */
- ahash_append_load_str(p, digestsize);
+ PROGRAM_FINALIZE(p);
}

static int ahash_set_sh_desc(struct crypto_ahash *ahash)
@@ -315,35 +258,11 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
int digestsize = crypto_ahash_digestsize(ahash);
struct device *jrdev = ctx->jrdev;
- u32 have_key = 0;
u32 *desc;
- struct program prg;
- struct program *p = &prg;
-
- if (ctx->split_key_len)
- have_key = OP_ALG_AAI_HMAC_PRECOMP;

/* ahash_update shared descriptor */
desc = ctx->sh_desc_update;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- SHR_HDR(p, SHR_SERIAL, 1, 0);
-
- /* Import context from software */
- SEQLOAD(p, CONTEXT2, 0, ctx->ctx_len, 0);
-
- /* Class 2 operation */
- ALG_OPERATION(p, ctx->alg_type & OP_ALG_ALGSEL_MASK,
- ctx->alg_type & OP_ALG_AAI_MASK, OP_ALG_AS_UPDATE,
- ICV_CHECK_DISABLE, DIR_ENC);
-
- /* Load data and write to result or context */
- ahash_append_load_str(p, ctx->ctx_len);
-
- PROGRAM_FINALIZE(p);
-
+ ahash_gen_sh_desc(desc, OP_ALG_AS_UPDATE, ctx->ctx_len, ctx, true);
ctx->sh_desc_update_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_update_dma)) {
@@ -358,15 +277,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)

/* ahash_update_first shared descriptor */
desc = ctx->sh_desc_update_first;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- ahash_data_to_out(p, have_key | ctx->alg_type, OP_ALG_AS_INIT,
- ctx->ctx_len, ctx);
-
- PROGRAM_FINALIZE(p);
-
+ ahash_gen_sh_desc(desc, OP_ALG_AS_INIT, ctx->ctx_len, ctx, false);
ctx->sh_desc_update_first_dma = dma_map_single(jrdev, desc,
DESC_BYTES(desc),
DMA_TO_DEVICE);
@@ -382,15 +293,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)

/* ahash_final shared descriptor */
desc = ctx->sh_desc_fin;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- ahash_ctx_data_to_out(p, have_key | ctx->alg_type, OP_ALG_AS_FINALIZE,
- digestsize, ctx);
-
- PROGRAM_FINALIZE(p);
-
+ ahash_gen_sh_desc(desc, OP_ALG_AS_FINALIZE, digestsize, ctx, true);
ctx->sh_desc_fin_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_fin_dma)) {
@@ -404,15 +307,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)

/* ahash_finup shared descriptor */
desc = ctx->sh_desc_finup;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- ahash_ctx_data_to_out(p, have_key | ctx->alg_type, OP_ALG_AS_FINALIZE,
- digestsize, ctx);
-
- PROGRAM_FINALIZE(p);
-
+ ahash_gen_sh_desc(desc, OP_ALG_AS_FINALIZE, digestsize, ctx, true);
ctx->sh_desc_finup_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_finup_dma)) {
@@ -426,15 +321,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)

/* ahash_digest shared descriptor */
desc = ctx->sh_desc_digest;
- PROGRAM_CNTXT_INIT(p, desc, 0);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- ahash_data_to_out(p, have_key | ctx->alg_type, OP_ALG_AS_INITFINAL,
- digestsize, ctx);
-
- PROGRAM_FINALIZE(p);
-
+ ahash_gen_sh_desc(desc, OP_ALG_AS_INITFINAL, digestsize, ctx, false);
ctx->sh_desc_digest_dma = dma_map_single(jrdev, desc, DESC_BYTES(desc),
DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, ctx->sh_desc_digest_dma)) {
@@ -891,7 +778,6 @@ static int ahash_update_ctx(struct ahash_request *req)
SEQINPTR(p, edesc->sec4_sg_dma, ctx->ctx_len + to_hash,
SGF | EXT);
SEQOUTPTR(p, state->ctx_dma, ctx->ctx_len, EXT);
-
PROGRAM_FINALIZE(p);

#ifdef DEBUG
@@ -956,17 +842,17 @@ static int ahash_final_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- PROGRAM_CNTXT_INIT(p, desc, sh_len);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);
-
edesc->sec4_sg_bytes = sec4_sg_bytes;
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
DESC_JOB_IO_LEN;
+ edesc->dst_dma = dma_map_single(jrdev, req->result, digestsize,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+ dev_err(jrdev, "unable to map dst\n");
+ return -ENOMEM;
+ }
+
edesc->src_nents = 0;

ret = ctx_map_to_sec4_sg(desc, jrdev, state, ctx->ctx_len,
@@ -979,6 +865,12 @@ static int ahash_final_ctx(struct ahash_request *req)
last_buflen);
(edesc->sec4_sg + sec4_sg_bytes - 1)->len |= SEC4_SG_LEN_FIN;

+ sh_len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);
+
edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
sec4_sg_bytes, DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
@@ -987,14 +879,7 @@ static int ahash_final_ctx(struct ahash_request *req)
}

SEQINPTR(p, edesc->sec4_sg_dma, ctx->ctx_len + buflen, SGF | EXT);
-
- edesc->dst_dma = map_seq_out_ptr_result(p, jrdev, req->result,
- digestsize);
- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
- dev_err(jrdev, "unable to map dst\n");
- return -ENOMEM;
- }
-
+ SEQOUTPTR(p, edesc->dst_dma, digestsize, EXT);
PROGRAM_FINALIZE(p);

#ifdef DEBUG
@@ -1050,19 +935,18 @@ static int ahash_finup_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- PROGRAM_CNTXT_INIT(p, desc, sh_len);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);
-
edesc->src_nents = src_nents;
edesc->chained = chained;
edesc->sec4_sg_bytes = sec4_sg_bytes;
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
DESC_JOB_IO_LEN;
+ edesc->dst_dma = dma_map_single(jrdev, req->result, digestsize,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+ dev_err(jrdev, "unable to map dst\n");
+ return -ENOMEM;
+ }

ret = ctx_map_to_sec4_sg(desc, jrdev, state, ctx->ctx_len,
edesc->sec4_sg, DMA_TO_DEVICE);
@@ -1076,6 +960,12 @@ static int ahash_finup_ctx(struct ahash_request *req)
src_map_to_sec4_sg(jrdev, req->src, src_nents, edesc->sec4_sg +
sec4_sg_src_index, chained);

+ sh_len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);
+
edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
sec4_sg_bytes, DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
@@ -1085,14 +975,7 @@ static int ahash_finup_ctx(struct ahash_request *req)

SEQINPTR(p, edesc->sec4_sg_dma, ctx->ctx_len + buflen + req->nbytes,
SGF | EXT);
-
- edesc->dst_dma = map_seq_out_ptr_result(p, jrdev, req->result,
- digestsize);
- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
- dev_err(jrdev, "unable to map dst\n");
- return -ENOMEM;
- }
-
+ SEQOUTPTR(p, edesc->dst_dma, digestsize, EXT);
PROGRAM_FINALIZE(p);

#ifdef DEBUG
@@ -1146,17 +1029,16 @@ static int ahash_digest(struct ahash_request *req)
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
DESC_JOB_IO_LEN;
edesc->sec4_sg_bytes = sec4_sg_bytes;
+ edesc->dst_dma = dma_map_single(jrdev, req->result, digestsize,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+ dev_err(jrdev, "unable to map dst\n");
+ return -ENOMEM;
+ }
+
edesc->src_nents = src_nents;
edesc->chained = chained;
-
- sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- PROGRAM_CNTXT_INIT(p, desc, sh_len);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);
-
if (src_nents) {
sg_to_sec4_sg_last(req->src, src_nents, edesc->sec4_sg, 0);
edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
@@ -1170,15 +1052,14 @@ static int ahash_digest(struct ahash_request *req)
} else {
src_dma = sg_dma_address(req->src);
}
- SEQINPTR(p, src_dma, req->nbytes, options);
-
- edesc->dst_dma = map_seq_out_ptr_result(p, jrdev, req->result,
- digestsize);
- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
- dev_err(jrdev, "unable to map dst\n");
- return -ENOMEM;
- }

+ sh_len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);
+ SEQINPTR(p, src_dma, req->nbytes, options);
+ SEQOUTPTR(p, edesc->dst_dma, digestsize, EXT);
PROGRAM_FINALIZE(p);

#ifdef DEBUG
@@ -1225,33 +1106,30 @@ static int ahash_final_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = DESC_LEN(sh_desc);
+ edesc->src_nents = 0;
desc = edesc->hw_desc;
- PROGRAM_CNTXT_INIT(p, desc, sh_len);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);
-
state->buf_dma = dma_map_single(jrdev, buf, buflen, DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, state->buf_dma)) {
dev_err(jrdev, "unable to map src\n");
return -ENOMEM;
}

- SEQINPTR(p, state->buf_dma, buflen, EXT);
-
- edesc->dst_dma = map_seq_out_ptr_result(p, jrdev, req->result,
- digestsize);
+ edesc->dst_dma = dma_map_single(jrdev, req->result, digestsize,
+ DMA_FROM_DEVICE);
if (dma_mapping_error(jrdev, edesc->dst_dma)) {
dev_err(jrdev, "unable to map dst\n");
return -ENOMEM;
}

+ sh_len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);
+ SEQINPTR(p, state->buf_dma, buflen, EXT);
+ SEQOUTPTR(p, edesc->dst_dma, digestsize, EXT);
PROGRAM_FINALIZE(p);

- edesc->src_nents = 0;
-
#ifdef DEBUG
print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
DUMP_PREFIX_ADDRESS, 16, 4, desc, DESC_BYTES(desc), 1);
@@ -1314,6 +1192,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

+ desc = edesc->hw_desc;
edesc->src_nents = src_nents;
edesc->chained = chained;
edesc->sec4_sg_bytes = sec4_sg_bytes;
@@ -1331,12 +1210,17 @@ static int ahash_update_no_ctx(struct ahash_request *req)
state->current_buf = !state->current_buf;
}

+ state->ctx_dma = dma_map_single(jrdev, state->caam_ctx,
+ ctx->ctx_len, DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, state->ctx_dma)) {
+ dev_err(jrdev, "unable to map ctx\n");
+ return -ENOMEM;
+ }
+
sh_len = DESC_LEN(sh_desc);
- desc = edesc->hw_desc;
PROGRAM_CNTXT_INIT(p, desc, sh_len);
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
-
JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);

edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
@@ -1348,11 +1232,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
}

SEQINPTR(p, edesc->sec4_sg_dma, to_hash, SGF | EXT);
-
- ret = map_seq_out_ptr_ctx(p, jrdev, state, ctx->ctx_len);
- if (ret)
- return ret;
-
+ SEQOUTPTR(p, state->ctx_dma, ctx->ctx_len, EXT);
PROGRAM_FINALIZE(p);

#ifdef DEBUG
@@ -1425,19 +1305,18 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
return -ENOMEM;
}

- sh_len = DESC_LEN(sh_desc);
desc = edesc->hw_desc;
- PROGRAM_CNTXT_INIT(p, desc, sh_len);
- if (ps)
- PROGRAM_SET_36BIT_ADDR(p);
-
- JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);
-
edesc->src_nents = src_nents;
edesc->chained = chained;
edesc->sec4_sg_bytes = sec4_sg_bytes;
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
DESC_JOB_IO_LEN;
+ edesc->dst_dma = dma_map_single(jrdev, req->result, digestsize,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+ dev_err(jrdev, "unable to map dst\n");
+ return -ENOMEM;
+ }

state->buf_dma = try_buf_map_to_sec4_sg(jrdev, edesc->sec4_sg, buf,
state->buf_dma, buflen,
@@ -1446,6 +1325,12 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
src_map_to_sec4_sg(jrdev, req->src, src_nents, edesc->sec4_sg + 1,
chained);

+ sh_len = DESC_LEN(sh_desc);
+ PROGRAM_CNTXT_INIT(p, desc, sh_len);
+ if (ps)
+ PROGRAM_SET_36BIT_ADDR(p);
+ JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);
+
edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
sec4_sg_bytes, DMA_TO_DEVICE);
if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
@@ -1454,14 +1339,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
}

SEQINPTR(p, edesc->sec4_sg_dma, buflen + req->nbytes, SGF | EXT);
-
- edesc->dst_dma = map_seq_out_ptr_result(p, jrdev, req->result,
- digestsize);
- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
- dev_err(jrdev, "unable to map dst\n");
- return -ENOMEM;
- }
-
+ SEQOUTPTR(p, edesc->dst_dma, digestsize, EXT);
PROGRAM_FINALIZE(p);

#ifdef DEBUG
@@ -1528,12 +1406,19 @@ static int ahash_update_first(struct ahash_request *req)
return -ENOMEM;
}

+ desc = edesc->hw_desc;
edesc->src_nents = src_nents;
edesc->chained = chained;
edesc->sec4_sg_bytes = sec4_sg_bytes;
edesc->sec4_sg = (void *)edesc + sizeof(struct ahash_edesc) +
DESC_JOB_IO_LEN;
edesc->dst_dma = 0;
+ state->ctx_dma = dma_map_single(jrdev, state->caam_ctx,
+ ctx->ctx_len, DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, state->ctx_dma)) {
+ dev_err(jrdev, "unable to map ctx\n");
+ return -ENOMEM;
+ }

if (src_nents) {
sg_to_sec4_sg_last(req->src, src_nents,
@@ -1556,19 +1441,12 @@ static int ahash_update_first(struct ahash_request *req)
sg_copy_part(next_buf, req->src, to_hash, req->nbytes);

sh_len = DESC_LEN(sh_desc);
- desc = edesc->hw_desc;
PROGRAM_CNTXT_INIT(p, desc, sh_len);
if (ps)
PROGRAM_SET_36BIT_ADDR(p);
-
JOB_HDR(p, SHR_DEFER, sh_len, ptr, REO | SHR);
-
SEQINPTR(p, src_dma, to_hash, options);
-
- ret = map_seq_out_ptr_ctx(p, jrdev, state, ctx->ctx_len);
- if (ret)
- return ret;
-
+ SEQOUTPTR(p, state->ctx_dma, ctx->ctx_len, EXT);
PROGRAM_FINALIZE(p);

#ifdef DEBUG
--
1.8.3.1
Horia Geanta
2014-08-14 12:54:34 UTC
Permalink
Add SGML template for generating RTA docbook.
Source code is in drivers/crypto/caam/flib

Cc: Randy Dunlap <***@infradead.org>
Signed-off-by: Horia Geanta <***@freescale.com>
---
Documentation/DocBook/Makefile | 3 +-
Documentation/DocBook/rta-api.tmpl | 261 ++++++++++++++++++++++
Documentation/DocBook/rta/.gitignore | 1 +
Documentation/DocBook/rta/Makefile | 5 +
Documentation/DocBook/rta/rta_arch.svg | 381 +++++++++++++++++++++++++++++++++
5 files changed, 650 insertions(+), 1 deletion(-)
create mode 100644 Documentation/DocBook/rta-api.tmpl
create mode 100644 Documentation/DocBook/rta/.gitignore
create mode 100644 Documentation/DocBook/rta/Makefile
create mode 100644 Documentation/DocBook/rta/rta_arch.svg

diff --git a/Documentation/DocBook/Makefile b/Documentation/DocBook/Makefile
index bec06659e0eb..f2917495db49 100644
--- a/Documentation/DocBook/Makefile
+++ b/Documentation/DocBook/Makefile
@@ -15,7 +15,7 @@ DOCBOOKS := z8530book.xml device-drivers.xml \
80211.xml debugobjects.xml sh.xml regulator.xml \
alsa-driver-api.xml writing-an-alsa-driver.xml \
tracepoint.xml drm.xml media_api.xml w1.xml \
- writing_musb_glue_layer.xml
+ writing_musb_glue_layer.xml rta-api.xml

include Documentation/DocBook/media/Makefile

@@ -53,6 +53,7 @@ htmldocs: $(HTML)
$(call build_main_index)
$(call build_images)
$(call install_media_images)
+ $(call install_rta_images)

MAN := $(patsubst %.xml, %.9, $(BOOKS))
mandocs: $(MAN)
diff --git a/Documentation/DocBook/rta-api.tmpl b/Documentation/DocBook/rta-api.tmpl
new file mode 100644
index 000000000000..90c5c5a8d9a7
--- /dev/null
+++ b/Documentation/DocBook/rta-api.tmpl
@@ -0,0 +1,261 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
+ "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>
+
+<book id="RTAapi">
+ <bookinfo>
+ <title>Writing descriptors for Freescale CAAM using RTA library</title>
+ <authorgroup>
+ <author>
+ <firstname>Horia</firstname>
+ <surname>Geanta</surname>
+ <affiliation>
+ <address><email>***@freescale.com</email></address>
+ </affiliation>
+ </author>
+ </authorgroup>
+
+ <copyright>
+ <year>2008-2014</year>
+ <holder>Freescale Semiconductor</holder>
+ </copyright>
+
+ <legalnotice>
+ <para>
+ This documentation is free software; you can redistribute
+ it and/or modify it under the terms of the GNU General Public
+ License as published by the Free Software Foundation; either
+ version 2 of the License, or (at your option) any later
+ version.
+ </para>
+
+ <para>
+ This program is distributed in the hope that it will be
+ useful, but WITHOUT ANY WARRANTY; without even the implied
+ warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ See the GNU General Public License for more details.
+ </para>
+
+ <para>
+ For more details see the file COPYING in the source
+ distribution of Linux.
+ </para>
+ </legalnotice>
+ </bookinfo>
+
+<toc></toc>
+
+ <chapter id="intro">
+ <title>Introduction</title>
+ <sect1>
+ <title>About</title>
+!Pdrivers/crypto/caam/flib/rta.h About
+!Pdrivers/crypto/caam/flib/rta.h Usage
+ <mediaobject>
+ <imageobject>
+ <imagedata fileref="rta_arch.svg" format="SVG" align="CENTER"/>
+ </imageobject>
+ <caption><para>RTA Integration Overview</para></caption>
+ </mediaobject>
+ </sect1>
+ <sect1>
+ <title>Using RTA</title>
+ <para>
+      RTA can be used in an application just by including the following header file:
+      #include "flib/rta.h"
+ </para>
+ <para>
+      The files in the drivers/crypto/caam/desc directory contain several
+      real-world descriptors written with RTA. You can use them as-is or adapt
+      them to your needs.
+ </para>
+ <para>
+      RTA routines take as their first parameter a pointer to a
+      "struct program" variable, which holds housekeeping information
+      used during descriptor creation.
+ </para>
+ <para>
+      RTA creates the descriptors and saves them in buffers. It is the user's
+      job to allocate memory for these buffers before passing them to the
+      RTA program initialization call.
+ </para>
+ <para>
+      An RTA program must start with a call to PROGRAM_CNTXT_INIT and end with
+      PROGRAM_FINALIZE. PROGRAM_CNTXT_INIT initializes the members of the
+      'program' structure with user information (a pointer to the user's
+      buffer and the SEC subversion). The PROGRAM_FINALIZE call checks the
+      descriptor's validity.
+ </para>
+ <para>
+      The program length is limited by the size of the descriptor buffer,
+      which is at most 64 words (256 bytes). However, a JUMP command can
+      trigger loading and execution of another Job Descriptor, which allows
+      much larger programs to be created.
+ </para>
+ </sect1>
+ <sect1>
+ <title>RTA components</title>
+ <para>
+      The package content is split into two main components:
+ <itemizedlist mark='opencircle'>
+ <listitem>
+ <para>descriptor builder API (drivers/crypto/caam/flib/rta.h)</para>
+ </listitem>
+ <listitem>
+ <para>
+ ready to use RTA descriptors
+ (drivers/crypto/caam/flib/desc/*.h)
+ </para>
+ </listitem>
+ </itemizedlist>
+ </para>
+ <para>
+ These are the main building blocks of descriptors:
+ <itemizedlist mark='opencircle'>
+ <listitem>
+ <para>buffer management: init &amp; finalize</para>
+ </listitem>
+ <listitem>
+ <para>SEC commands: MOVE, LOAD, FIFO_LOAD etc.</para>
+ </listitem>
+ <listitem>
+ <para>descriptor labels (e.g. used as JUMP destinations)</para>
+ </listitem>
+ <listitem>
+ <para>
+ utility commands: (e.g. PATCH_* commands that update labels and
+ references)
+ </para>
+ </listitem>
+ </itemizedlist>
+ </para>
+ <para>
+      In some cases, not all descriptor fields can be set when the commands
+      are inserted. These fields must be updated later, much as a linker
+      patches references in a binary file. RTA uses PATCH_* commands to
+      collect the relevant information and PROGRAM_FINALIZE to complete the
+      "code relocation".
+ </para>
+ <para>
+      If descriptors larger than 64 words are needed, their function can be
+      split across several smaller ones. In such cases the smaller descriptors
+      are correlated and updated using PATCH_*_NON_LOCAL commands. Unlike in
+      the single-descriptor case, these calls must appear only after all the
+      descriptors have been finalized, since only then are references to all
+      descriptors available.
+ </para>
+ </sect1>
+ </chapter>
+
+ <chapter id="apiref">
+ <title>RTA API reference</title>
+ <sect1>
+ <title>Descriptor Buffer Management Routines</title>
+!Pdrivers/crypto/caam/flib/rta.h Descriptor Buffer Management Routines
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h rta_sec_era
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h USER_SEC_ERA
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h INTL_SEC_ERA
+!Fdrivers/crypto/caam/flib/rta.h PROGRAM_CNTXT_INIT
+!Fdrivers/crypto/caam/flib/rta.h PROGRAM_FINALIZE
+!Fdrivers/crypto/caam/flib/rta.h PROGRAM_SET_36BIT_ADDR
+!Fdrivers/crypto/caam/flib/rta.h PROGRAM_SET_BSWAP
+!Fdrivers/crypto/caam/flib/rta.h WORD
+!Fdrivers/crypto/caam/flib/rta.h DWORD
+!Fdrivers/crypto/caam/flib/rta.h COPY_DATA
+!Fdrivers/crypto/caam/flib/rta.h DESC_LEN
+!Fdrivers/crypto/caam/flib/rta.h DESC_BYTES
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h program
+ </sect1>
+ <sect1>
+ <title>SEC Commands Routines</title>
+!Pdrivers/crypto/caam/flib/rta.h SEC Commands Routines
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h rta_share_type
+!Fdrivers/crypto/caam/flib/rta.h SHR_HDR
+!Fdrivers/crypto/caam/flib/rta.h JOB_HDR
+!Fdrivers/crypto/caam/flib/rta.h JOB_HDR_EXT
+!Fdrivers/crypto/caam/flib/rta.h MOVE
+!Fdrivers/crypto/caam/flib/rta.h MOVEB
+!Fdrivers/crypto/caam/flib/rta.h MOVEDW
+!Fdrivers/crypto/caam/flib/rta.h FIFOLOAD
+!Fdrivers/crypto/caam/flib/rta.h SEQFIFOLOAD
+!Fdrivers/crypto/caam/flib/rta.h FIFOSTORE
+!Fdrivers/crypto/caam/flib/rta.h SEQFIFOSTORE
+!Fdrivers/crypto/caam/flib/rta.h KEY
+!Fdrivers/crypto/caam/flib/rta.h SEQINPTR
+!Fdrivers/crypto/caam/flib/rta.h SEQOUTPTR
+!Fdrivers/crypto/caam/flib/rta.h ALG_OPERATION
+!Fdrivers/crypto/caam/flib/rta.h PROTOCOL
+!Fdrivers/crypto/caam/flib/rta.h PKHA_OPERATION
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h rta_jump_cond
+!Fdrivers/crypto/caam/flib/rta/sec_run_time_asm.h rta_jump_type
+!Fdrivers/crypto/caam/flib/rta.h JUMP
+!Fdrivers/crypto/caam/flib/rta.h JUMP_INC
+!Fdrivers/crypto/caam/flib/rta.h JUMP_DEC
+!Fdrivers/crypto/caam/flib/rta.h LOAD
+!Fdrivers/crypto/caam/flib/rta.h SEQLOAD
+!Fdrivers/crypto/caam/flib/rta.h STORE
+!Fdrivers/crypto/caam/flib/rta.h SEQSTORE
+!Fdrivers/crypto/caam/flib/rta.h MATHB
+!Fdrivers/crypto/caam/flib/rta.h MATHI
+!Fdrivers/crypto/caam/flib/rta.h MATHU
+!Fdrivers/crypto/caam/flib/rta.h SIGNATURE
+!Fdrivers/crypto/caam/flib/rta.h NFIFOADD
+ </sect1>
+ <sect1>
+ <title>Self Referential Code Management Routines</title>
+!Pdrivers/crypto/caam/flib/rta.h Self Referential Code Management Routines
+!Fdrivers/crypto/caam/flib/rta.h REFERENCE
+!Fdrivers/crypto/caam/flib/rta.h LABEL
+!Fdrivers/crypto/caam/flib/rta.h SET_LABEL
+!Fdrivers/crypto/caam/flib/rta.h PATCH_JUMP
+!Fdrivers/crypto/caam/flib/rta.h PATCH_JUMP_NON_LOCAL
+!Fdrivers/crypto/caam/flib/rta.h PATCH_MOVE
+!Fdrivers/crypto/caam/flib/rta.h PATCH_MOVE_NON_LOCAL
+!Fdrivers/crypto/caam/flib/rta.h PATCH_LOAD
+!Fdrivers/crypto/caam/flib/rta.h PATCH_STORE
+!Fdrivers/crypto/caam/flib/rta.h PATCH_STORE_NON_LOCAL
+!Fdrivers/crypto/caam/flib/rta.h PATCH_RAW
+!Fdrivers/crypto/caam/flib/rta.h PATCH_RAW_NON_LOCAL
+ </sect1>
+ </chapter>
+
+ <chapter id="descapi">
+ <title>RTA descriptors library</title>
+ <sect1>
+ <title>Shared Descriptor Example Routines</title>
+ <sect2>
+ <title>Algorithms - Shared Descriptor Constructors</title>
+!Pdrivers/crypto/caam/flib/desc/algo.h Algorithms - Shared Descriptor Constructors
+!Fdrivers/crypto/caam/flib/desc/algo.h cnstr_shdsc_cbc_blkcipher
+ </sect2>
+ <sect2>
+ <title>IPsec Shared Descriptor Constructors</title>
+!Pdrivers/crypto/caam/flib/desc/ipsec.h IPsec Shared Descriptor Constructors
+!Fdrivers/crypto/caam/flib/desc/ipsec.h cnstr_shdsc_aead_encap
+!Fdrivers/crypto/caam/flib/desc/ipsec.h cnstr_shdsc_aead_givencap
+!Fdrivers/crypto/caam/flib/desc/ipsec.h cnstr_shdsc_aead_decap
+!Fdrivers/crypto/caam/flib/desc/ipsec.h cnstr_shdsc_aead_null_encap
+!Fdrivers/crypto/caam/flib/desc/ipsec.h cnstr_shdsc_aead_null_decap
+ </sect2>
+ </sect1>
+ <sect1>
+ <title>Job Descriptor Example Routines</title>
+!Pdrivers/crypto/caam/flib/desc/jobdesc.h Job Descriptor Constructors
+!Fdrivers/crypto/caam/flib/desc/jobdesc.h cnstr_jobdesc_mdsplitkey
+ </sect1>
+ <sect1>
+ <title>Auxiliary Data Structures</title>
+!Pdrivers/crypto/caam/flib/desc/common.h Shared Descriptor Constructors - shared structures
+!Fdrivers/crypto/caam/flib/desc/common.h alginfo
+!Fdrivers/crypto/caam/flib/desc/common.h protcmd
+ </sect1>
+ <sect1>
+ <title>Auxiliary Defines</title>
+!Fdrivers/crypto/caam/flib/desc/ipsec.h DESC_AEAD_ENC_LEN
+!Fdrivers/crypto/caam/flib/desc/ipsec.h DESC_AEAD_GIVENC_LEN
+!Fdrivers/crypto/caam/flib/desc/ipsec.h DESC_AEAD_DEC_LEN
+!Fdrivers/crypto/caam/flib/desc/ipsec.h DESC_AEAD_NULL_ENC_LEN
+!Fdrivers/crypto/caam/flib/desc/ipsec.h DESC_AEAD_NULL_DEC_LEN
+ </sect1>
+ </chapter>
+</book>
diff --git a/Documentation/DocBook/rta/.gitignore b/Documentation/DocBook/rta/.gitignore
new file mode 100644
index 000000000000..e461c585fde8
--- /dev/null
+++ b/Documentation/DocBook/rta/.gitignore
@@ -0,0 +1 @@
+!*.svg
diff --git a/Documentation/DocBook/rta/Makefile b/Documentation/DocBook/rta/Makefile
new file mode 100644
index 000000000000..58981e3ae3ef
--- /dev/null
+++ b/Documentation/DocBook/rta/Makefile
@@ -0,0 +1,5 @@
+RTA_OBJ_DIR=$(objtree)/Documentation/DocBook/
+RTA_SRC_DIR=$(srctree)/Documentation/DocBook/rta
+
+install_rta_images = \
+ $(Q)cp $(RTA_SRC_DIR)/*.svg $(RTA_OBJ_DIR)/rta_api
diff --git a/Documentation/DocBook/rta/rta_arch.svg b/Documentation/DocBook/rta/rta_arch.svg
new file mode 100644
index 000000000000..d816eed04852
--- /dev/null
+++ b/Documentation/DocBook/rta/rta_arch.svg
@@ -0,0 +1,381 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="644.09448819"
+ height="652.3622047"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.2 r9819"
+ sodipodi:docname="rta_arch.svg"
+ inkscape:export-filename="Z:\repos\sdk-devel\flib\sec\rta\doc\images\rta_arch.png"
+ inkscape:export-xdpi="90"
+ inkscape:export-ydpi="90">
+ <title
+ id="title3950">RTA Integration Overview</title>
+ <defs
+ id="defs4">
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Lend"
+ style="overflow:visible;">
+ <path
+ id="path4157"
+ style="font-size:12.0;fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(1.1) rotate(180) translate(1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Lend"
+ style="overflow:visible;">
+ <path
+ id="path4139"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;marker-start:none;"
+ transform="scale(0.8) rotate(180) translate(12.5,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ id="base"
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1.0"
+ inkscape:pageopacity="0.0"
+ inkscape:pageshadow="2"
+ inkscape:zoom="0.98994949"
+ inkscape:cx="338.47626"
+ inkscape:cy="723.66809"
+ inkscape:document-units="px"
+ inkscape:current-layer="layer1"
+ showgrid="false"
+ inkscape:window-width="1440"
+ inkscape:window-height="878"
+ inkscape:window-x="-8"
+ inkscape:window-y="-8"
+ inkscape:window-maximized="1" />
+ <metadata
+ id="metadata7">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title>RTA Integration Overview</dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <g
+ inkscape:label="Layer 1"
+ inkscape:groupmode="layer"
+ id="layer1"
+ style="display:inline">
+ <rect
+ style="fill:#e5ffe5;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.94082779;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.9408278, 1.8816556;stroke-dashoffset:0"
+ id="rect2985"
+ width="533.80353"
+ height="200.01016"
+ x="82.832512"
+ y="49.280708"
+ ry="19.1929" />
+ <rect
+ style="fill:#99ffcc;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:1;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:1, 2;stroke-dashoffset:0"
+ id="rect3767"
+ width="101.01525"
+ height="53.538086"
+ x="243.44676"
+ y="73.524353"
+ ry="19.1929" />
+ <rect
+ style="fill:#99ffcc;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.81756771;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.81756773, 1.63513546;stroke-dashoffset:0"
+ id="rect3767-1"
+ width="101.01525"
+ height="35.785767"
+ x="243.44678"
+ y="159.89241"
+ ry="12.82886" />
+ <rect
+ style="fill:#ff66ff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.81756771;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.81756773, 1.63513546;stroke-dashoffset:0"
+ id="rect3767-1-8"
+ width="101.01525"
+ height="35.785767"
+ x="490.93414"
+ y="81.895447"
+ ry="12.82886" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="529.31989"
+ y="103.82895"
+ id="text3832"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834"
+ x="529.31989"
+ y="103.82895">RTA</tspan></text>
+ <rect
+ style="fill:#ffffcc;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.76365763;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.76365765, 1.5273153;stroke-dashoffset:0"
+ id="rect2985-5"
+ width="533.80353"
+ height="131.77383"
+ x="81.600868"
+ y="287.67673"
+ ry="12.644968" />
+ <rect
+ style="fill:#ff66ff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.81756771;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.81756773, 1.63513546;stroke-dashoffset:0"
+ id="rect3767-1-8-1"
+ width="101.01525"
+ height="35.785767"
+ x="463.66003"
+ y="373.82953"
+ ry="12.82886" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="500.61041"
+ y="395.72299"
+ id="text3832-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834-2"
+ x="500.61041"
+ y="395.72299">RTA</tspan></text>
+ <rect
+ style="fill:#ccecff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.76365763;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.76365765, 1.5273153;stroke-dashoffset:0"
+ id="rect2985-5-7"
+ width="533.80353"
+ height="131.77383"
+ x="80.590714"
+ y="460.18579"
+ ry="12.644968" />
+ <rect
+ style="fill:#99ccff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.30565068;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.30565068, 0.61130137;stroke-dashoffset:0"
+ id="rect2985-5-6"
+ width="203.08368"
+ height="55.48671"
+ x="248.03383"
+ y="519.5426"
+ ry="5.3244843" />
+ <flowRoot
+ xml:space="preserve"
+ id="flowRoot4061"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"><flowRegion
+ id="flowRegion4063"><rect
+ id="rect4065"
+ width="45.456863"
+ height="17.172594"
+ x="139.40105"
+ y="685.67682" /></flowRegion><flowPara
+ id="flowPara4067" /></flowRoot> <path
+ style="fill:none;stroke:#000000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;stroke-dashoffset:0;marker-end:url(#Arrow2Lend)"
+ d="M 344.46201,100.19032 490.93414,99.891405"
+ id="path4131"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#rect3767"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#rect3767-1-8"
+ inkscape:connection-end-point="d4" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 293.95439,127.06244 1e-5,32.82997"
+ id="path4763"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#rect3767"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#rect3767-1"
+ inkscape:connection-end-point="d4" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 440.96319,335.95105 49.7186,37.87848"
+ id="path5135"
+ inkscape:connector-type="polyline"
+ inkscape:connector-curvature="0"
+ inkscape:connection-start="#rect4101"
+ inkscape:connection-start-point="d4"
+ inkscape:connection-end="#rect3767-1-8-1"
+ inkscape:connection-end-point="d4" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 292.94424,193.73252 25.25381,338.4011"
+ id="path3067"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend)"
+ d="m 212.13204,394.75287 94.95433,137.38075"
+ id="path3069"
+ inkscape:connector-curvature="0" />
+ <path
+ style="fill:none;stroke:#000000;stroke-width:1.004;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow2Lend);stroke-miterlimit:4;stroke-dasharray:none"
+ d="m 273.75134,378.59043 189.28009,13.13199"
+ id="path3071"
+ inkscape:connector-curvature="0" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="103.62045"
+ y="71.464035"
+ id="text3832-1"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834-7"
+ x="103.62045"
+ y="71.464035">User space</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="99.680267"
+ y="313.51968"
+ id="text3832-1-4"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834-7-0"
+ x="99.680267"
+ y="313.51968">Kernel space</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="96.269417"
+ y="482.21518"
+ id="text3832-1-4-8"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834-7-0-8"
+ x="96.269417"
+ y="482.21518">Platform hardware</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;text-align:center;line-height:125%;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="294.0625"
+ y="94.316589"
+ id="text3832-1-2"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan3834-7-4"
+ x="294.0625"
+ y="94.316589">Crypto</tspan><tspan
+ sodipodi:role="line"
+ x="294.0625"
+ y="111.81659"
+ id="tspan3138">application</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;text-align:center;line-height:125%;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="295.19696"
+ y="182.62668"
+ id="text3832-1-2-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="295.19696"
+ y="182.62668"
+ id="tspan3138-1">QBMAN</tspan></text>
+ </g>
+ <g
+ inkscape:groupmode="layer"
+ id="layer2"
+ inkscape:label="Layer 2"
+ style="display:inline">
+ <rect
+ style="fill:#ccecff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.10832807;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.10832807, 0.21665614;stroke-dashoffset:0;display:inline"
+ id="rect2985-5-7-3"
+ width="46.55518"
+ height="30.403757"
+ x="292.39508"
+ y="532.58911"
+ ry="2.9175332" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="308.09653"
+ y="552.33661"
+ id="text4015"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan4017"
+ x="308.09653"
+ y="552.33661">QI</tspan></text>
+ <rect
+ style="fill:#ccecff;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.10832807;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.10832807, 0.21665614;stroke-dashoffset:0;display:inline"
+ id="rect2985-5-7-3-2"
+ width="46.55518"
+ height="30.403757"
+ x="384.82404"
+ y="533.09424"
+ ry="2.9175332" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="397.49506"
+ y="551.8316"
+ id="text4015-2"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan4017-1"
+ x="397.49506"
+ y="551.8316">JRI</tspan></text>
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="254.55844"
+ y="535.16406"
+ id="text4069"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan4071"
+ x="254.55844"
+ y="535.16406">SEC</tspan></text>
+ <rect
+ style="fill:#ffcc00;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.80089962;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.8008996, 1.6017992;stroke-dashoffset:0"
+ id="rect4101"
+ width="112.12693"
+ height="31.101717"
+ x="348.50262"
+ y="304.84933"
+ ry="3.415338" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="369.71585"
+ y="325.0524"
+ id="text4103"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan4105"
+ x="369.71585"
+ y="325.0524">SEC Driver</tspan></text>
+ <rect
+ style="fill:#ffcc00;fill-opacity:1;fill-rule:nonzero;stroke:#ff0000;stroke-width:0.80738008;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:0.80738011, 1.61476021;stroke-dashoffset:0;display:inline"
+ id="rect4101-5"
+ width="111.56696"
+ height="31.765713"
+ x="162.08232"
+ y="362.38086"
+ ry="3.4882529" />
+ <text
+ xml:space="preserve"
+ style="font-size:14px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;display:inline;font-family:Sans"
+ x="177.28172"
+ y="383.64117"
+ id="text4103-7"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ id="tspan4105-6"
+ x="177.28172"
+ y="383.64117">SEC QI Driver</tspan></text>
+ </g>
+</svg>
--
1.8.3.1
Randy Dunlap
2014-08-19 20:54:50 UTC
Permalink
Post by Horia Geanta
Add SGML template for generating RTA docbook.
Source code is in drivers/crypto/caam/flib
---
Documentation/DocBook/Makefile | 3 +-
Documentation/DocBook/rta-api.tmpl | 261 ++++++++++++++++++++++
Documentation/DocBook/rta/.gitignore | 1 +
Documentation/DocBook/rta/Makefile | 5 +
Documentation/DocBook/rta/rta_arch.svg | 381 +++++++++++++++++++++++++++++++++
5 files changed, 650 insertions(+), 1 deletion(-)
create mode 100644 Documentation/DocBook/rta-api.tmpl
create mode 100644 Documentation/DocBook/rta/.gitignore
create mode 100644 Documentation/DocBook/rta/Makefile
create mode 100644 Documentation/DocBook/rta/rta_arch.svg
diff --git a/Documentation/DocBook/rta-api.tmpl b/Documentation/DocBook/rta-api.tmpl
new file mode 100644
index 000000000000..90c5c5a8d9a7
--- /dev/null
+++ b/Documentation/DocBook/rta-api.tmpl
@@ -0,0 +1,261 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
+ "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>
+
+<book id="RTAapi">
+ <bookinfo>
+ <title>Writing descriptors for Freescale CAAM using RTA library</title>
+ <authorgroup>
+ <author>
+ <firstname>Horia</firstname>
+ <surname>Geanta</surname>
+ <affiliation>
+ </affiliation>
+ </author>
+ </authorgroup>
+
+ <copyright>
+ <year>2008-2014</year>
+ <holder>Freescale Semiconductor</holder>
+ </copyright>
...
Post by Horia Geanta
+ <chapter id="intro">
+ <title>Introduction</title>
+ <sect1>
+ <title>About</title>
+!Pdrivers/crypto/caam/flib/rta.h About
+!Pdrivers/crypto/caam/flib/rta.h Usage
+ <mediaobject>
+ <imageobject>
+ <imagedata fileref="rta_arch.svg" format="SVG" align="CENTER"/>
+ </imageobject>
+ <caption><para>RTA Integration Overview</para></caption>
+ </mediaobject>
+ </sect1>
+ <sect1>
+ <title>Using RTA</title>
+ <para>
+ #include flib/rta.h
Needs quotation marks or angle brackets?
Post by Horia Geanta
+ </para>
+ <para>
+ The files in drivers/crypto/caam/desc directory contain several
+ real-world descriptors written with RTA. You can use them as-is or adapt
+ them to your needs.
+ </para>
+ <para>
+ RTA routines take as first parameter a pointer to a "struct program"
+ variable. It contains several housekeeping information that are used
drop: several & change: is
Post by Horia Geanta
+ during descriptor creation.
+ </para>
+ <para>
+ RTA creates the descriptors and saves them in buffers. It is the user's
+ job to allocate memory for these buffers before passing them to RTA
+ program initialization call.
+ </para>
+ <para>
+ A RTA program must start with a call to PROGRAM_CNTXT_INIT and end with
An
Post by Horia Geanta
+ PROGRAM_FINALIZE. PROGRAM_CNTXT_INIT will initialze the members of
initialize
Post by Horia Geanta
+ 'program' structure with user information (pointer to user's buffer, and
+ the SEC subversion). The PROGRAM_FINALIZE call checks the descriptor's
+ validity.
+ </para>
[snip]
--
~Randy
Kim Phillips
2014-08-16 11:16:55 UTC
Permalink
On Thu, 14 Aug 2014 15:54:22 +0300
Post by Horia Geanta
This patch set adds Run Time Assembler (RTA) SEC descriptor library.
RTA is a replacement for incumbent "inline append".
The library is intended to be a single code base for SEC descriptors creation
for all Freescale products. This comes with a series of advantages, such as
library being maintained / kept up-to-date with latest platforms, i.e. SEC
functionalities (for e.g. SEC incarnations present in Layerscape LS1 and LS2).
RTA detects options in SEC descriptors that are not supported
by a SEC HW revision ("Era") and reports this back.
Say a descriptor uses Sequence Out Pointer (SOP) option for the SEQINPTR
command, which is supported starting from SEC Era 5. If the descriptor would
be built on a P4080R3 platform (which has SEC Era 4), RTA would report
"SEQ IN PTR: Flag(s) not supported by SEC Era 4".
This is extremely useful and saves a lot of time wasted on debugging.
SEC HW detects only *some* of these problems, leaving user wonder what causes
a "DECO Watchdog Timeout". And when it prints something more useful, sometimes
it does not point to the exact opcode.
again, RTA just adds bloat to the kernel driver - the kernel driver
is supposed to generate the appropriate descriptor for its target
running SEC version no matter what, not "report back" what is/is not
supported. This is a flaw at the RTA design level, as far as the
kernel driver is concerned.

Thanks,

Kim
Horia Geantă
2014-09-03 09:59:34 UTC
Permalink
Post by Kim Phillips
On Thu, 14 Aug 2014 15:54:22 +0300
Post by Horia Geanta
This patch set adds Run Time Assembler (RTA) SEC descriptor library.
RTA is a replacement for incumbent "inline append".
The library is intended to be a single code base for SEC descriptors creation
for all Freescale products. This comes with a series of advantages, such as
library being maintained / kept up-to-date with latest platforms, i.e. SEC
functionalities (for e.g. SEC incarnations present in Layerscape LS1 and LS2).
RTA detects options in SEC descriptors that are not supported
by a SEC HW revision ("Era") and reports this back.
Say a descriptor uses Sequence Out Pointer (SOP) option for the SEQINPTR
command, which is supported starting from SEC Era 5. If the descriptor would
be built on a P4080R3 platform (which has SEC Era 4), RTA would report
"SEQ IN PTR: Flag(s) not supported by SEC Era 4".
This is extremely useful and saves a lot of time wasted on debugging.
SEC HW detects only *some* of these problems, leaving user wonder what causes
a "DECO Watchdog Timeout". And when it prints something more useful, sometimes
it does not point to the exact opcode.
again, RTA just adds bloat to the kernel driver - the kernel driver
is supposed to generate the appropriate descriptor for its target
running SEC version no matter what, not "report back" what is/is not
supported. This is a flaw at the RTA design level, as far as the
kernel driver is concerned.
What is your understanding of developing a descriptor?
First it needs to be written, then tested - within the kernel driver.
Having no error checking in the code that generates descriptors
increases testing / debugging time significantly. Again, SEC HW provides
some error reporting, but in many cases this is a clueless Watchdog Timeout.
SEC descriptors development is complex enough to deserve a few
indications along the way.

Regards,
Horia
Kim Phillips
2014-09-03 23:54:02 UTC
Permalink
On Wed, 3 Sep 2014 12:59:34 +0300
Post by Horia Geantă
Post by Kim Phillips
On Thu, 14 Aug 2014 15:54:22 +0300
Post by Horia Geanta
This patch set adds Run Time Assembler (RTA) SEC descriptor library.
RTA is a replacement for incumbent "inline append".
The library is intended to be a single code base for SEC descriptors creation
for all Freescale products. This comes with a series of advantages, such as
library being maintained / kept up-to-date with latest platforms, i.e. SEC
functionalities (for e.g. SEC incarnations present in Layerscape LS1 and LS2).
RTA detects options in SEC descriptors that are not supported
by a SEC HW revision ("Era") and reports this back.
Say a descriptor uses Sequence Out Pointer (SOP) option for the SEQINPTR
command, which is supported starting from SEC Era 5. If the descriptor would
be built on a P4080R3 platform (which has SEC Era 4), RTA would report
"SEQ IN PTR: Flag(s) not supported by SEC Era 4".
This is extremely useful and saves a lot of time wasted on debugging.
SEC HW detects only *some* of these problems, leaving user wonder what causes
a "DECO Watchdog Timeout". And when it prints something more useful, sometimes
it does not point to the exact opcode.
again, RTA just adds bloat to the kernel driver - the kernel driver
is supposed to generate the appropriate descriptor for its target
running SEC version no matter what, not "report back" what is/is not
supported. This is a flaw at the RTA design level, as far as the
kernel driver is concerned.
What is your understanding of developing a descriptor?
First it needs to be written, then tested - within the kernel driver.
Having no error checking in the code that generates descriptors
increases testing / debugging time significantly. Again, SEC HW provides
some error reporting, but in many cases this is a clueless Watchdog Timeout.
SEC descriptors development is complex enough to deserve a few
indications along the way.
AFAICT, RTA doesn't address the Watchdog Timeout issue (which
pretty much always means some part of the SEC has timed out waiting
for more input data). This is something learnt pretty quickly by
SEC developers, and something best to leave to the h/w anyway, since
it would be too cumbersome to add to descriptor construction. So
instead of using RTA, we can discuss enhancing error.c messages to
address cluing in users wrt what the error means, but that change
should probably start with the SEC documentation itself (which is
what error.c messages are based on, and what developers should be
referencing in the first place :).

So RTA tells you what command flags are supported in which SEC
versions. I'm pretty sure h/w behaviour covers that case, too, but
the premise doesn't apply to the kernel driver, since adding support
to the existing descriptors for a new feature flag implies knowing
which SEC version it was introduced in, because the kernel must
work on all versions of the SEC. This promotes a more constructive
implementation design, i.e., the upper levels of the descriptor
construction code shouldn't use the flag if the h/w doesn't support
it in the first place: none of this 'reporting back' business.

FWIW, I doubt that out-of-kernel users would want this feature
either - it's bloat to them too, assuming they are working under the
same constraints as the kernel. If a user experiences the 'flag
unsupported' error, hypothetically they'd go back to the upper level
descriptor construction code and adjust it accordingly, rendering
the original check a waste of runtime after that point.

This reinforces my notion that RTA was not well thought-out from the
beginning. Probably a better place to do these basic checks is in
the form of an external disassembler.

In any case, most of the kernel's crypto API algorithm descriptors
are already fixed, written and constantly get backward-compatibility
tested on newer h/w via the kernel's testing infrastructure: RTA
adds nothing but bloat, and for that reason, I consider it a
regression, and therefore unacceptable for upstream inclusion.
Sorry.

Kim
