Struct crypto_bigint::UInt
pub struct UInt<const LIMBS: usize> { /* private fields */ }
Big unsigned integer.
Generic over the given number of LIMBS
Encoding support
This type supports many different types of encodings, either via the Encoding trait or via the various const fn decoding and encoding functions that can be used with UInt constants.
Optional crate features for encoding (off-by-default):
generic-array: enables the ArrayEncoding trait, which can be used to serialize a UInt as a GenericArray<u8, N>, and the ArrayDecoding trait, which can be used to deserialize a GenericArray<u8, N> as a UInt.
rlp: support for Recursive Length Prefix (RLP) encoding.
Implementations
impl<const LIMBS: usize> UInt<LIMBS>
pub const fn adc(&self, rhs: &Self, carry: Limb) -> (Self, Limb)
Computes a + b + carry, returning the result along with the new carry.
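A minimal sketch of carry propagation, assuming the crate's U64 alias and a Limb::ZERO constant:

```rust
use crypto_bigint::{Limb, U64};

let a = U64::from_be_hex("00000000000000ff");
let b = U64::from_be_hex("0000000000000001");

// No incoming carry; the sum fits in 64 bits, so the returned carry limb is zero.
let (sum, _carry) = a.adc(&b, Limb::ZERO);
assert_eq!(sum, U64::from_be_hex("0000000000000100"));
```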
pub const fn wrapping_add(&self, rhs: &Self) -> Self
Perform wrapping addition, discarding overflow.
pub fn checked_add(&self, rhs: &Self) -> CtOption<Self>
Perform checked addition, returning a CtOption which is_some only if the operation did not overflow.
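A short sketch contrasting the wrapping and checked variants, assuming the U64 alias and its ZERO/ONE/MAX constants:

```rust
use crypto_bigint::U64;

// MAX + 1 overflows: wrapping_add silently wraps to zero,
// while checked_add reports the overflow through CtOption.
assert_eq!(U64::MAX.wrapping_add(&U64::ONE), U64::ZERO);
assert!(bool::from(U64::MAX.checked_add(&U64::ONE).is_none()));
```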
impl<const LIMBS: usize> UInt<LIMBS>
pub const fn wrapping_and(&self, rhs: &Self) -> Self
Perform wrapping bitwise AND. There’s no way wrapping could ever happen. This function exists so that all operations are accounted for in the wrapping operations.
pub fn checked_and(&self, rhs: &Self) -> CtOption<Self>
Perform checked bitwise AND, returning a CtOption which is always is_some.
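Since bitwise AND can never overflow, the checked variant always succeeds; a small illustration assuming the U64 alias:

```rust
use crypto_bigint::U64;

let a = U64::from_be_hex("00000000000000f0");
let b = U64::from_be_hex("00000000000000ff");

// 0xf0 & 0xff == 0xf0, and the CtOption is always Some.
assert_eq!(a.wrapping_and(&b), a);
assert!(bool::from(a.checked_and(&b).is_some()));
```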
impl<const LIMBS: usize> UInt<LIMBS>
pub fn div_rem(&self, rhs: &Self) -> CtOption<(Self, Self)>
Computes self / rhs, returning the quotient and remainder if rhs != 0.
pub fn reduce(&self, rhs: &Self) -> CtOption<Self>
Computes self % rhs, returning the remainder if rhs != 0.
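A sketch of quotient/remainder computation and the zero-divisor case, assuming the U64 alias:

```rust
use crypto_bigint::U64;

let x = U64::from_be_hex("0000000000000007");
let y = U64::from_be_hex("0000000000000002");

// 7 / 2 == 3 remainder 1; both values are only available for a non-zero divisor.
let (quo, rem) = x.div_rem(&y).unwrap();
assert_eq!(quo, U64::from_be_hex("0000000000000003"));
assert_eq!(rem, U64::ONE);

// A zero divisor yields a CtOption that is_none.
assert!(bool::from(x.div_rem(&U64::ZERO).is_none()));
```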
pub const fn wrapping_div(&self, rhs: &Self) -> Self
Wrapped division is just normal division, i.e. self / rhs. There’s no way wrapping could ever happen. This function exists so that all operations are accounted for in the wrapping operations.
pub fn checked_div(&self, rhs: &Self) -> CtOption<Self>
Perform checked division, returning a CtOption which is_some only if rhs != 0.
pub const fn wrapping_rem(&self, rhs: &Self) -> Self
Wrapped (modular) remainder calculation is just self % rhs. There’s no way wrapping could ever happen. This function exists so that all operations are accounted for in the wrapping operations.
pub fn checked_rem(&self, rhs: &Self) -> CtOption<Self>
Perform checked reduction, returning a CtOption which is_some only if rhs != 0.
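The checked division and remainder operations follow the same pattern; a brief sketch assuming the U64 alias:

```rust
use crypto_bigint::U64;

let x = U64::from_be_hex("0000000000000009");
let y = U64::from_be_hex("0000000000000004");

// 9 / 4 == 2 remainder 1; is_some only because the divisor is non-zero.
assert_eq!(x.checked_div(&y).unwrap(), U64::from_be_hex("0000000000000002"));
assert_eq!(x.checked_rem(&y).unwrap(), U64::ONE);
assert!(bool::from(x.checked_div(&U64::ZERO).is_none()));
```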
impl<const LIMBS: usize> UInt<LIMBS>
pub const fn from_be_slice(bytes: &[u8]) -> Self
Create a new UInt from the provided big endian bytes.
pub const fn from_be_hex(hex: &str) -> Self
Create a new UInt from the provided big endian hex string.
pub const fn from_le_slice(bytes: &[u8]) -> Self
Create a new UInt from the provided little endian bytes.
pub const fn from_le_hex(hex: &str) -> Self
Create a new UInt from the provided little endian hex string.
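Because these constructors are const fn, they can define UInt constants; a sketch assuming the U128 alias, whose big endian hex form is exactly 32 characters:

```rust
use crypto_bigint::U128;

// Big endian hex: the string must cover the full width of the integer.
const N: U128 = U128::from_be_hex("00000000000000000000000000abcdef");

// The little endian byte constructor accepts the same value in reversed byte order.
let m = U128::from_le_slice(&0xabcdef_u128.to_le_bytes());
assert_eq!(N, m);
```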
impl<const LIMBS: usize> UInt<LIMBS>
pub const fn from_uint_array(arr: [Inner; LIMBS]) -> Self
Create a UInt from an array of the limb::Inner unsigned integer type.
pub const fn to_uint_array(self) -> [Inner; LIMBS]
Create an array of the limb::Inner unsigned integer type from a UInt.
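Limbs are stored least significant first; a round-trip sketch assuming a 64-bit target, where limb::Inner is u64 and U128 has two limbs:

```rust
use crypto_bigint::U128;

// Least significant limb first (platform assumption: 64-bit limbs).
let limbs: [u64; 2] = [0x89ab_cdef, 0x0123_4567];
let x = U128::from_uint_array(limbs);
assert_eq!(x.to_uint_array(), limbs);
```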
impl<const LIMBS: usize> UInt<LIMBS>
pub const fn mul_wide(&self, rhs: &Self) -> (Self, Self)
Compute “wide” multiplication, with a product twice the size of the input.
pub const fn wrapping_mul(&self, rhs: &Self) -> Self
Perform wrapping multiplication, discarding overflow.
pub fn checked_mul(&self, rhs: &Self) -> CtOption<Self>
Perform checked multiplication, returning a CtOption which is_some only if the operation did not overflow.
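A sketch of the three multiplication flavours, assuming the U64 alias:

```rust
use crypto_bigint::U64;

let a = U64::MAX;
let two = U64::from_be_hex("0000000000000002");

// MAX * 2 does not fit in 64 bits: checked_mul reports the overflow,
// while wrapping_mul keeps only the truncated low half (MAX - 1).
assert!(bool::from(a.checked_mul(&two).is_none()));
assert_eq!(a.wrapping_mul(&two), U64::MAX.wrapping_sub(&U64::ONE));

// mul_wide returns both halves of the double-width product; the ordering of
// the tuple is not asserted in this sketch.
let (_half_a, _half_b) = a.mul_wide(&two);
```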
impl<const LIMBS: usize> UInt<LIMBS>
pub const fn wrapping_or(&self, rhs: &Self) -> Self
Perform wrapping bitwise OR. There’s no way wrapping could ever happen. This function exists so that all operations are accounted for in the wrapping operations.
pub fn checked_or(&self, rhs: &Self) -> CtOption<Self>
Perform checked bitwise OR, returning a CtOption which is always is_some.
impl<const LIMBS: usize> UInt<LIMBS>
pub const fn shl_vartime(&self, n: usize) -> Self
Computes self << n.
NOTE: this operation is variable time with respect to n ONLY. When used with a fixed n, this function is constant-time with respect to self.
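A sketch with a public (fixed) shift amount, assuming the U64 alias:

```rust
use crypto_bigint::U64;

// The shift amount is a public constant here, so the variable-time
// behaviour with respect to `n` leaks nothing sensitive.
assert_eq!(U64::ONE.shl_vartime(4), U64::from_be_hex("0000000000000010"));
```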
impl<const LIMBS: usize> UInt<LIMBS>
pub const fn shr_vartime(&self, shift: usize) -> Self
Computes self >> shift.
NOTE: this operation is variable time with respect to shift ONLY. When used with a fixed shift, this function is constant-time with respect to self.
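The right-shift counterpart, again with a public shift amount and the U64 alias:

```rust
use crypto_bigint::U64;

let x = U64::from_be_hex("0000000000000010");
assert_eq!(x.shr_vartime(4), U64::ONE);
```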
impl<const LIMBS: usize> UInt<LIMBS>
pub const fn sqrt(&self) -> Self
Computes √(self).
Uses Brent & Zimmermann, Modern Computer Arithmetic, v0.5.9, Algorithm 1.13.
Callers can check if self is a square by squaring the result.
pub const fn wrapping_sqrt(&self) -> Self
Wrapped sqrt is just normal √(self). There’s no way wrapping could ever happen. This function exists so that all operations are accounted for in the wrapping operations.
pub fn checked_sqrt(&self) -> CtOption<Self>
Perform checked sqrt, returning a CtOption which is_some only if √(self)² == self.
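A sketch distinguishing perfect squares from non-squares, assuming the U64 alias:

```rust
use crypto_bigint::U64;

let nine = U64::from_be_hex("0000000000000009");
let ten = U64::from_be_hex("000000000000000a");

// The integer square root truncates: both 9 and 10 yield 3.
assert_eq!(nine.sqrt(), U64::from_be_hex("0000000000000003"));
assert_eq!(ten.sqrt(), U64::from_be_hex("0000000000000003"));

// checked_sqrt is_some only when squaring the root gives back self.
assert!(bool::from(nine.checked_sqrt().is_some()));
assert!(bool::from(ten.checked_sqrt().is_none()));
```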
impl<const LIMBS: usize> UInt<LIMBS>
pub const fn sbb(&self, rhs: &Self, borrow: Limb) -> (Self, Limb)
Computes a - (b + borrow), returning the result along with the new borrow.
pub const fn wrapping_sub(&self, rhs: &Self) -> Self
Perform wrapping subtraction, discarding underflow and wrapping around the boundary of the type.
pub fn checked_sub(&self, rhs: &Self) -> CtOption<Self>
Perform checked subtraction, returning a CtOption which is_some only if the operation did not underflow.
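A sketch of the borrow-based and checked subtraction paths, assuming the U64 alias and a Limb::ZERO constant:

```rust
use crypto_bigint::{Limb, U64};

let a = U64::from_be_hex("0000000000000005");
let b = U64::from_be_hex("0000000000000007");

// 5 - 7 underflows: sbb returns the wrapped difference plus a non-zero borrow,
// wrapping_sub keeps only the wrapped difference, and checked_sub is_none.
let (diff, _borrow) = a.sbb(&b, Limb::ZERO);
assert_eq!(diff, a.wrapping_sub(&b));
assert!(bool::from(a.checked_sub(&b).is_none()));
```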
impl<const LIMBS: usize> UInt<LIMBS>
pub fn random(rng: impl CryptoRng + RngCore) -> Self
Generate a cryptographically secure random UInt.
pub fn random_mod(rng: impl CryptoRng + RngCore, modulus: &Self) -> Self
Generate a cryptographically secure random UInt which is less than a given modulus.
This function uses rejection sampling, a method which produces an unbiased distribution of in-range values provided the underlying CryptoRng is unbiased, but runs in variable-time.
The variable-time nature of the algorithm should not pose a security issue so long as the underlying random number generator is truly a CryptoRng, where previous outputs are unrelated to subsequent outputs and do not reveal information about the RNG’s internal state.
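A sketch of both generators, assuming the U256 alias, the crate feature that enables randomness support, and OsRng from rand_core as the CryptoRng + RngCore source:

```rust
use crypto_bigint::U256;
use rand_core::OsRng;

// Uniform over the full 256-bit range.
let _unbounded = U256::random(&mut OsRng);

// Rejection sampling keeps the result uniformly distributed below the modulus.
let modulus = U256::from_be_hex(
    "ffffffff00000001000000000000000000000000ffffffffffffffffffffffff",
);
let bounded = U256::random_mod(&mut OsRng, &modulus);
assert!(bounded < modulus);
```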
Trait Implementations
impl<const LIMBS: usize> ConditionallySelectable for UInt<LIMBS>
impl<const LIMBS: usize> ConstantTimeEq for UInt<LIMBS>
impl<const LIMBS: usize> ConstantTimeGreater for UInt<LIMBS>
impl<const LIMBS: usize> ConstantTimeLess for UInt<LIMBS>
impl<const LIMBS: usize> Div<&'_ NonZero<UInt<LIMBS>>> for &UInt<LIMBS> where
UInt<LIMBS>: Integer,
impl<const LIMBS: usize> Div<&'_ NonZero<UInt<LIMBS>>> for UInt<LIMBS> where
UInt<LIMBS>: Integer,
impl<const LIMBS: usize> Div<NonZero<UInt<LIMBS>>> for &UInt<LIMBS> where
UInt<LIMBS>: Integer,
impl<const LIMBS: usize> Div<NonZero<UInt<LIMBS>>> for UInt<LIMBS> where
UInt<LIMBS>: Integer,
impl<const LIMBS: usize> DivAssign<&'_ NonZero<UInt<LIMBS>>> for UInt<LIMBS> where
UInt<LIMBS>: Integer,
fn div_assign(&mut self, rhs: &NonZero<UInt<LIMBS>>)
Performs the /= operation.
impl<const LIMBS: usize> DivAssign<NonZero<UInt<LIMBS>>> for UInt<LIMBS> where
UInt<LIMBS>: Integer,
fn div_assign(&mut self, rhs: NonZero<UInt<LIMBS>>)
Performs the /= operation.
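These impls enable the / and % operators once the divisor is wrapped in NonZero; a sketch assuming the U64 alias and that NonZero::new returns a CtOption:

```rust
use crypto_bigint::{NonZero, U64};

let x = U64::from_be_hex("0000000000000064"); // 100
let d = NonZero::new(U64::from_be_hex("0000000000000007")).unwrap(); // 7, proven non-zero

assert_eq!(x / &d, U64::from_be_hex("000000000000000e")); // 100 / 7 == 14
assert_eq!(x % &d, U64::from_be_hex("0000000000000002")); // 100 % 7 == 2
```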
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 64 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 128 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 1536 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 1792 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 2048 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 3072 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 4096 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 192 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 256 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 384 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 448 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 512 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 768 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 896 / crate::limb::BIT_SIZE * 2 }>
impl From<(UInt<{nlimbs!($bits)}>, UInt<{nlimbs!($bits)}>)> for UInt<{ 1024 / crate::limb::BIT_SIZE * 2 }>
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 1024 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 1024 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 1536 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 1536 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 1792 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 1792 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 2048 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 2048 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 3072 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 3072 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 3584 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 3584 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 4096 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 4096 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 6144 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 6144 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 8192 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 8192 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 128 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 128 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 192 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 192 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 256 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 256 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 384 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 384 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 448 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 448 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 512 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 512 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 768 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 768 / crate::limb::BIT_SIZE / 2 }>)
impl From<UInt<{nlimbs!($bits)}>> for (UInt<{ 896 / crate::limb::BIT_SIZE / 2 }>, UInt<{ 896 / crate::limb::BIT_SIZE / 2 }>)
impl<const LIMBS: usize> Ord for UInt<LIMBS>
impl<const LIMBS: usize> PartialOrd<UInt<LIMBS>> for UInt<LIMBS>
fn partial_cmp(&self, other: &Self) -> Option<Ordering>
This method returns an ordering between self and other values if one exists.
fn lt(&self, other: &Rhs) -> bool
This method tests less than (for self and other) and is used by the < operator.
fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for self and other) and is used by the <= operator.
impl<const LIMBS: usize> Rem<&'_ NonZero<UInt<LIMBS>>> for &UInt<LIMBS> where
UInt<LIMBS>: Integer,
impl<const LIMBS: usize> Rem<&'_ NonZero<UInt<LIMBS>>> for UInt<LIMBS> where
UInt<LIMBS>: Integer,
impl<const LIMBS: usize> Rem<NonZero<UInt<LIMBS>>> for &UInt<LIMBS> where
UInt<LIMBS>: Integer,
impl<const LIMBS: usize> Rem<NonZero<UInt<LIMBS>>> for UInt<LIMBS> where
UInt<LIMBS>: Integer,
impl<const LIMBS: usize> RemAssign<&'_ NonZero<UInt<LIMBS>>> for UInt<LIMBS> where
UInt<LIMBS>: Integer,
fn rem_assign(&mut self, rhs: &NonZero<UInt<LIMBS>>)
Performs the %= operation.
impl<const LIMBS: usize> RemAssign<NonZero<UInt<LIMBS>>> for UInt<LIMBS> where
UInt<LIMBS>: Integer,
fn rem_assign(&mut self, rhs: NonZero<UInt<LIMBS>>)
Performs the %= operation.
impl<const LIMBS: usize> ShlAssign<usize> for UInt<LIMBS>
fn shl_assign(&mut self, rhs: usize)
NOTE: this operation is variable time with respect to rhs ONLY. When used with a fixed rhs, this function is constant-time with respect to self.
impl<const LIMBS: usize> ShrAssign<usize> for UInt<LIMBS>
fn shr_assign(&mut self, rhs: usize)
Performs the >>= operation.
impl<const LIMBS: usize> Copy for UInt<LIMBS>
impl<const LIMBS: usize> Eq for UInt<LIMBS>
Auto Trait Implementations
impl<const LIMBS: usize> RefUnwindSafe for UInt<LIMBS>
impl<const LIMBS: usize> Send for UInt<LIMBS>
impl<const LIMBS: usize> Sync for UInt<LIMBS>
impl<const LIMBS: usize> Unpin for UInt<LIMBS>
impl<const LIMBS: usize> UnwindSafe for UInt<LIMBS>
Blanket Implementations
impl<T> BorrowMut<T> for T where
T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> ToOwned for T where
T: Clone,
type Owned = T
The resulting type after obtaining ownership.
fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.