
Comments (4)

mratsim commented on August 27, 2024

Some clarifications

As mentioned in the Milagro docs, lazy carry and lazy reduction are separate, independent concepts.

Lazy reductions

Most prime modulus bitsizes do not exactly match a multiple of the library's logical word size (31 or 63 bits); for example, the BN254 and BLS12-381 curves leave respectively 63*5 - 254 = 61 and 63*7 - 381 = 60 unused bits in the last word. This can be used to accumulate up to 60~61 additions without worrying about overflowing the physical representation.
We would then need to subtract up to 60~61 times P to return to a fully reduced representation.
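As a quick sanity check on that budget, a minimal sketch (the names are illustrative, not Constantine's API):

const LogicalWordBits = 63

func excessBits(modulusBits, numWords: int): int =
  ## Unused bits in the last word for a given logical word size.
  numWords * LogicalWordBits - modulusBits

echo excessBits(254, 5)  # BN254:     63*5 - 254 = 61
echo excessBits(381, 7)  # BLS12-381: 63*7 - 381 = 60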

Not fully reducing

As an optimization, we can stay in an unreduced representation by conditionally subtracting half that range after that many additions, taking advantage of the fact that operands are unlikely to be of the same order of magnitude as the prime (2^254 or 2^381).

Fully reducing in logarithmic time

Or we can use a logarithmic approach by conditionally subtracting half, then a quarter, then an eighth, then ... of the excess range.
Given a logical word size of 31 or 63 bits, the excess bits are at most 30 or 62, and so we would need 4~5 conditional subtractions every 30~62 additions.

As far as I know this is a novel approach that is not mentioned in the literature and was never used in software.
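A runnable toy of the logarithmic approach on a single uint64 limb, with a small prime for illustration (a real implementation would use constant-time multi-limb comparison and conditional subtraction):

func logReduce(a, p: uint64, maxExcessLog2: int): uint64 =
  ## Conditionally subtracts 2^k * p for k = maxExcessLog2 .. 0,
  ## halving the excess range at each step.
  result = a
  for k in countdown(maxExcessLog2, 0):
    let m = p shl k                    # 2^k * p
    let needSub = uint64(result >= m)  # would be a CTBool in real code
    result -= m * needSub              # conditional subtraction, no branch

echo logReduce(61'u64 * 97 + 5, 97, 6)  # prints 5: fully reduced in 7 conditional steps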

Dealing with subtraction

Subtraction of a reduced/unreduced operand by an unreduced operand must be handled specially:

  • either we follow Milagro's path and use negate + addition
  • or we follow Yanik's or Schwabe's path by using signed integers (or an emulation thereof)
  • or we follow the paper
    High-Speed Elliptic Curve Cryptography on the NVIDIA GT200 Graphics Processing Unit
    Shujie Cui, Johann Großschädl, Zhe Liu and Qiuliang Xu
    https://www.doc.ic.ac.uk/~scui2/papers/Cui2014_GPU.pdf
    which relies on the fact that a subtraction implemented as 4p + a - b will always be in the correct range when used in curve point addition and point doubling (a toy of this is sketched after this list). Note that this may not hold for extension fields.
    edit: This is also mentioned in Michael Scott's paper Slothful reduction, p. 6 and p. 8
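A toy of the third option on a single unsigned limb, with a small prime for illustration; the point is only that 4p + a - b never underflows when a, b < 4p:

func redundantSub(a, b, p: uint64): uint64 =
  ## Computes a - b up to a multiple of p, without branching on a < b.
  ## The result stays below 8p and is normalized later.
  4'u64 * p + a - b

echo redundantSub(3, 90, 97)  # 301, which is 10 mod 97 (= 3 - 90 mod 97)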

Removing the need for a final subtraction

With lazy reduction we have a redundant representation that can represent 2p, 3p, 4p, ..., 8p if there are enough excess bits. This may help Montgomery multiplication and exponentiation by avoiding the need for the last conditional subtraction (Walter, 1999; Hachez & Quisquater, 2000; Bos & Montgomery, 2017; and many hardware papers like Örs-Batina-Preneel-Vandewalle or Walter, 2017).
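For intuition, the standard bound argument from those papers: with Montgomery radix R = 2^(w*n) and inputs a, b < 2p, the Montgomery product is (a*b + m*p)/R < (4p^2 + R*p)/R = (4p/R)*p + p, which is < 2p whenever 4p < R. Outputs therefore stay in the same redundant domain and the conditional final subtraction disappears.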

Note that Gueron's Almost Montgomery Multiplication, which checks the MSB to know whether a reduction is needed, will leak information; and we would still need to estimate whether to remove p, 2p, ..., MaxExcess * p after each subtraction.

Lazy Carry

Lazy carries are (almost) independent from lazy reductions: here, the excess bits are within each machine word.
This allows replacing

func add*(a: BigIntViewMut, b: BigIntViewAny): CTBool[Word] =
  ## Constant-time in-place addition
  ## Returns the carry
  ##
  ## a and b MAY be the same buffer
  ## a and b MUST have the same announced bitlength (i.e. `bits` static parameters)
  checkMatchingBitlengths(a, b)

  for i in 0 ..< a.numLimbs():
    a[i] = a[i] + b[i] + Word(result)
    result = a[i].isMsbSet()
    a[i] = a[i].mask()

by

func add*(a: BigIntViewMut, b: BigIntViewAny) =
  checkMatchingBitlengths(a, b)

  for i in 0 ..< a.numLimbs():
    a[i] += b[i]
    a[i] = a[i].mask()

func modular_add(a: BigIntViewMut, b: BigIntViewAny, m: BigIntViewConst) =
  ## Pseudocode; this would be implemented at the field level, not the BigInt level
  a.add(b)
  if a.potential_carries == MaxPotentialCarries:
    a.normalizeCarries()
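For completeness, a sketch of what normalizeCarries could look like under this scheme (illustrative, not the actual implementation): fold the bits accumulated above the logical word size of each limb into the next limb.

func normalizeCarries(a: BigIntViewMut) =
  ## Propagate the lazy carries: each limb's bits above WordBitSize
  ## are folded into the next limb, restoring the canonical range.
  ## Assumes the final carry fits in the last word's excess space.
  var carry = Word(0)
  for i in 0 ..< a.numLimbs():
    let t = a[i] + carry
    carry = t shr WordBitSize  # the accumulated lazy carries
    a[i] = t.mask()            # keep the low WordBitSize bits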

This removes loop-carried dependencies and enables easy vectorization by the compiler.
The only potential limitation is that the compiler cannot infer the loop unrolling, but the function becomes small enough that monomorphization + inlining would probably not increase code size. (Separate functions require parameter and stack bookkeeping before the call, after the call, and within the function body itself; this can easily take 10 to 20 instructions before doing anything useful, whereas an inlined multi-limb addition + mask on fewer than 20 limbs wouldn't increase code size.)

They also enable multiplication by constants smaller than the current word excess without converting them to Montgomery form and using a full-blown Montgomery multiplication. This is useful for extension fields that use sqrt(-2) or sqrt(-5) as a root of their irreducible polynomial.
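A sketch of such a multiplication by a small constant, under the assumption that log2(c) fits in the remaining word excess (illustrative names):

func mulSmallConst(a: BigIntViewMut, c: Word) =
  ## In-place multiplication by a small constant c.
  ## Valid only while log2(c) does not exceed the remaining word excess;
  ## a later normalizeCarries() restores the canonical limb range.
  for i in 0 ..< a.numLimbs():
    a[i] = a[i] * c  # no carry chain thanks to the per-word excess bits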

For the normalization of lazy carries there is an optional coupling with lazy reduction (hence the "almost" at the beginning of this paragraph): all the lazy carries must be accumulated somewhere. Either they fit in the extra space of the last word (WordBitSize - ModulusBitSize - WordExcess) and we can handle this unreduced representation as in the lazy reduction case, or we need an extra temporary word.


mratsim commented on August 27, 2024

Beginning of a proof-of-concept here: mratsim/finite-fields@2673a04

Lazy carry

This is very easy to build for addition/subtraction/negation, as we can ensure that, with u = modulus.limbs[i], we always have u <= 2^ExcessBits * modulus.limbs[i], in particular for u == 0. With lazy carries, all additions/subtractions/negations are carryless thanks to the extra space per word.

Full reduction is also straightforward

However, with multiplication, a limb multiplication can carry over into the next limb; can it also exceed u <= 2^ExcessBits * modulus.limbs[i]?
Dealing with those cases may introduce a lot of complexity.
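For a concrete bound: with a logical word of w bits and e excess bits, each limb is < 2^(w+e), so a limb product is < 2^(2w+2e). For e > 0 this no longer fits in the 2w-bit double-word that a schoolbook multiplication assumes, which is where the extra complexity comes from.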

Lazy reduction

Lazy reduction, instead, is probably much less error-prone to implement, since only the high word holds the excess carries. It is however quite restricted: we only have 4*64 - 254 = 2 excess bits for BN254 (four 64-bit words) or 6*64 - 381 = 3 for BLS12-381 (six 64-bit words).


mratsim commented on August 27, 2024

None of the pairing curves have a special form that makes lazy reduction with a logical word smaller than the physical word worth it.
Scott's paper Slothful reduction, or the logarithmic reduction I mentioned, makes debugging more complex and just delays the inevitable.

However, in some select cases we can use the technique in

  • High-Speed Software Implementation of the Optimal Ate Pairing over Barreto-Naehrig Curves
    Jean-Luc Beuchat and Jorge Enrique González Díaz and Shigeo Mitsunari and Eiji Okamoto and Francisco Rodríguez-Henríquez and Tadanori Teruya, 2010
    https://eprint.iacr.org/2010/354

to delay reduction, but making that constant-time would still require a table lookup that scans all the potential n*p multiples of the field modulus p to select the appropriate one for reduction.
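A sketch of that constant-time scan, assuming hypothetical BigInt helpers isGreaterOrEqual (returning a CTBool) and ccopy (conditional copy):

func selectReduction(a: BigInt, multiplesOfP: openArray[BigInt]): BigInt =
  ## Scans every candidate n*p with no early exit, keeping the largest
  ## one that is <= a, so the access pattern is data-independent.
  for np in multiplesOfP:
    let take = a.isGreaterOrEqual(np)
    result.ccopy(np, take)  # overwrite result only when take is set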

Furthermore, #17 adds no-carry Montgomery multiplication and squaring usable for most pairing-friendly curves (up to 15% perf improvement). Optimizing multiplications and squarings is probably more important than optimizing additions, and forcing a reduction check before each one is costly. Even if we use a redundant representation with 2 extra bits, allowing values up to 4p for "final-subtraction-less" modular multiplication, that may just be paying for the subtraction upfront.


mratsim commented on August 27, 2024

For posterity, Patrick Longa's PhD thesis dedicates 30 pages to lazy reduction and to scheduling techniques that reduce or erase data dependencies.


It is followed by a dedicated section on pairings.
Notably, it gives subtraction formulas to canonicalize a lazily reduced field element in the case of double-precision arithmetic (i.e. using only 32 bits out of a 64-bit limb and leaving the extra 32 for carries).


This is followed by multiplication formulas in Fp2, Fp6 and Fp12, and squarings in Fp12, that use lazy reduction.

https://uwspace.uwaterloo.ca/handle/10012/5857

However the space cost is high, especially in constant-time settings where you have to iterate over all of the space all of the time. Using more space also doesn't play well with CPU caches, which are significantly more expensive than a data dependency. Lastly, while the ADC instruction's data-dependency cost was 6 cycles back then, recent Intel CPUs have a data-dependency cost of only 2 cycles (but only 1 execution port, AFAIK) provided you update a register (and not a memory location), and AMD is even less costly.

