Arbitrary precision
In computing, arbitrary precision or bignum (short for "big number") arithmetic is a technique that allows a computer program to represent integers or other numbers with as many digits of precision as desired, and to perform arithmetic operations on those numbers.
Numbers are usually stored as arrays of digits in binary or some other base. Unlike data types implemented in hardware, whose size is fixed (for example, by the width of the CPU registers), arbitrary-precision numbers can grow to whatever size is needed, using dynamically allocated memory.
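As an illustration of the digit-array idea, the following Python sketch stores a non-negative integer as a list of "limbs" in base 10**9 (least significant first) and adds two such numbers with explicit carry propagation. The names BASE, add, from_int and to_int are illustrative only, not taken from any particular library; real implementations use machine-word bases and many more operations.

```python
BASE = 10**9  # base of the limb array; production libraries typically use 2**32 or 2**64

def add(a: list[int], b: list[int]) -> list[int]:
    """Add two non-negative bignums stored as limb arrays (least significant limb first)."""
    result = []
    carry = 0
    for i in range(max(len(a), len(b))):
        s = carry
        if i < len(a):
            s += a[i]
        if i < len(b):
            s += b[i]
        carry, limb = divmod(s, BASE)
        result.append(limb)
    if carry:
        result.append(carry)  # the array grows as needed: this is the "dynamic size"
    return result

def from_int(n: int) -> list[int]:
    """Convert a non-negative Python int into a limb array (for testing the sketch)."""
    limbs = [0] if n == 0 else []
    while n:
        n, limb = divmod(n, BASE)
        limbs.append(limb)
    return limbs

def to_int(limbs: list[int]) -> int:
    """Rebuild a Python int from a limb array (for testing the sketch)."""
    value = 0
    for limb in reversed(limbs):
        value = value * BASE + limb
    return value

x, y = 2**200, 3**150
assert to_int(add(from_int(x), from_int(y))) == x + y
```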
If fractions are involved, the numerator and denominator can each be represented with such arrays; alternatively, a fixed-point notation can store the decimal digits up to the desired precision, or a floating-point format can store a significand multiplied by an exponent.
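Python's standard library happens to expose both approaches, so it serves as a convenient illustration: fractions.Fraction keeps an exact numerator/denominator pair of bignums, while decimal.Decimal stores a significand and exponent whose precision can be raised as desired.

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Exact rational arithmetic: numerator and denominator are both arbitrary-precision integers.
third = Fraction(1, 3)
print(third + Fraction(1, 6))   # exactly 1/2, with no rounding error

# Decimal floating point with a user-chosen precision: a significand scaled by a
# power of ten, limited only by available memory.
getcontext().prec = 50          # 50 significant digits
print(Decimal(1) / Decimal(7))  # 1/7 printed to 50 significant digits
```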
History and implementations
Arbitrary precision was first implemented in MacLisp. Later, the VAX/VMS operating system offered arbitrary-precision capabilities as a collection of functions operating on strings. Today, bignum libraries are available for most commonly used programming languages, and there are even languages designed specifically for arbitrary-precision calculation, such as the bc programming language. All computer algebra systems implement bignum facilities.
Applications
A common application is public-key cryptography, whose algorithms often use arithmetic with integers of hundreds or thousands of digits.
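As a toy illustration of the scale involved (not a secure protocol or a real key), the sketch below performs modular exponentiation, the core operation of RSA- and Diffie-Hellman-style schemes, on randomly chosen 2048-bit integers. Python's built-in pow handles the bignum arithmetic.

```python
import secrets

modulus = secrets.randbits(2048) | 1   # an arbitrary odd 2048-bit number, not a real key
base = secrets.randbits(2048) % modulus
exponent = 65537                        # a commonly used public exponent

# pow(base, exponent, modulus) performs fast modular exponentiation on big integers.
result = pow(base, exponent, modulus)
print(result.bit_length(), "bit result")
```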
It is also used to compute fundamental mathematical constants, such as pi, to millions of digits or more and to analyze their properties.
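A minimal sketch of this kind of computation, using Machin's formula pi = 16·arctan(1/5) − 4·arctan(1/239) with fixed-point integer arithmetic, is shown below; record computations use much faster methods (such as the Chudnovsky series), but the principle of carrying every digit in big integers is the same. The function names are illustrative.

```python
def arctan_inv(x: int, scale: int) -> int:
    """Return arctan(1/x) * scale using the Gregory series, entirely in integer arithmetic."""
    power = scale // x        # (1/x)^1, scaled
    total = power
    n = 1
    sign = -1
    while power:
        power //= x * x       # next odd power of 1/x, scaled
        n += 2
        total += sign * (power // n)
        sign = -sign
    return total

def pi_digits(digits: int) -> str:
    guard = 10                                   # extra digits to absorb truncation error
    scale = 10 ** (digits + guard)
    pi = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)   # Machin's formula
    return str(pi // 10 ** guard)                # drop the guard digits

print(pi_digits(50))   # 314159265358979323846... (decimal point omitted)
```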