Introducing Cryptography

1. General concepts

Cryptography plays a central role in the development of card technologies. Applications related to transaction processing must solve the tasks of protecting information from unauthorized access that are typical for many fields of technology, including:
  • ensuring the integrity of information (the impossibility for a third party located between the participants in the information exchange to modify the transmitted information in such a way that the receiving party does not notice it);
  • ensuring the confidentiality of information (the impossibility for a third party located between the participants in the information exchange to receive the information contained in the messages transmitted between the participants);
  • authentication of the information source (confirmation that the transmitting party is the author of the transmitted message, that is, the impossibility for a third party to claim authorship of a message on behalf of a participant in the information exchange);
  • notarization of information (the impossibility to refuse the authorship of the message, in card applications this property is sometimes called non-repudiation - the impossibility of refusing the result of the operation).

To solve the listed tasks, various cryptographic algorithms are used, otherwise called encryption algorithms. An encryption algorithm defines a one-to-one transformation that maps the set of possible messages from the sender, whose content must be hidden from a third party, into a set of messages, also called cryptotexts, that are intelligible only to the sender and the addressee.

All encryption algorithms are divided into two classes:
  • symmetric encryption algorithms;
  • asymmetric encryption algorithms.
Symmetric encryption algorithms are based on the use by both parties of a shared secret, called a key. Knowledge of the key X completely determines the cryptographic transformation Z = E_X(Y), also called the encryption of the message Y. This transformation is one-to-one; that is, there is a function E_X^(-1) such that for any Z and Y connected by the equality Z = E_X(Y), it is true that Y = E_X^(-1)(Z). The inverse transformation is often referred to as deciphering, or decrypting, the message Z.

Symmetric algorithms appeared in ancient times, for keeping important messages secret. Already the famous Greek historian Herodotus (5th century BC) gave examples of letters that could be understood only by the sender and the addressee. The Spartans used a mechanical device with which important messages were written in a special way that ensured the secrecy of the message.

The most reliable symmetric cryptographic algorithm is the Vernam cipher (the one-time pad). The essence of this algorithm is that for each message X to be sent, represented as a sequence of binary zeros and ones, a sequence Y of 0s and 1s of the same length as the message is randomly generated. The members of this sequence are independent random variables taking the values 0 and 1 with the same probability 0.5. The random sequence plays the role of a one-time key in the Vernam scheme. The cryptographic transformation then consists in the bitwise addition modulo 2 of the values of X and Y. Obviously, in order to "crack" the described cryptographic algorithm (i.e., to determine X), it is necessary to enumerate all possible values of the key Y.
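A minimal Python sketch of the Vernam transformation (illustrative only; `secrets` stands in for the truly random key source the scheme assumes):

```python
import secrets

def vernam_encrypt(message: bytes) -> tuple[bytes, bytes]:
    # Generate a one-time key Y of the same length as the message X.
    key = secrets.token_bytes(len(message))
    # Bitwise addition modulo 2 (XOR) of X and Y gives the cryptotext Z.
    cryptotext = bytes(x ^ y for x, y in zip(message, key))
    return cryptotext, key

def vernam_decrypt(cryptotext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: Z XOR Y = X.
    return bytes(z ^ y for z, y in zip(cryptotext, key))

z, y = vernam_encrypt(b"PAY 100 EUR")
assert vernam_decrypt(z, y) == b"PAY 100 EUR"
```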


The obvious disadvantages of the Vernam scheme are the need to transmit the key value to the recipient for each encrypted message, as well as the variable key length, equal to the length of the encrypted message. Of course, the different key values could be numbered and transmitted to the recipient once, in a secure way (for example, by passing a file with the key values from the sender to the recipient). After that, at the end of each encrypted message, the sender could tell the recipient which key value was used to encrypt the message. But then, obviously, the volume of possible key values must be commensurate with the volume of information exchanged between the parties, and this leads to the fact that the Vernam scheme is almost never used in practice (today this scheme is used only to transfer very important secret information).

Other symmetric algorithms are based on the principle of reusing one key of relatively small size. Such algorithms make it possible to transform messages Y_1, Y_2, ..., Y_n in such a way that, even knowing the values Z_i = E_X(Y_i) (i = 1, ..., n) for a sufficiently "large" number of messages n, it is impossible to determine the value of the key X.

The essence of asymmetric algorithms (schemes based on such algorithms are called public-key cryptosystems) is as follows. In mathematics, functions E are known for which the inverse function D is rather difficult to calculate unless some parameter is known (in cryptographic schemes this parameter becomes the secret key). The function E is available to anyone who wants to send a message intended for the owner of the parameter. To encrypt information sent to the owner of the parameter, it is enough to apply the known (open) transformation E to the transmitted message. After that, the owner of the parameter, using the inverse (closed) transformation D, which is based on the stored parameter, easily decrypts the received message. On the contrary, a person who does not possess the coveted parameter "will not be able" to calculate the inverse transformation and, therefore, to restore the transmitted message. The notion "will not be able" has a fairly definite meaning, which will be explained below when specific cryptographic algorithms are considered.

Asymmetric algorithms are an ideal mechanism for solving the problems of ensuring the integrity of transmitted information and authenticating its source. These tasks are solved using public-key schemes as follows. First, an open transformation H(Y), called a hash function, is applied to the transmitted message Y (below, without loss of generality, it is assumed that the message Y has a binary representation); it maps the message Y to a binary sequence of fixed length. The hash value for a particular message is called the digest or hash code of that message. The set of possible values of the hash function will be denoted by M.

Obviously, the transformation H(Y) is not one-to-one, since the set of possible values of Y is infinite, while the cardinality of the set M is bounded.

After the digest H(Y) of some message Y has been calculated, the closed transformation D, whose parameters are known only to the sender of the message, is applied to it. The value s = D(H(Y)) is called the electronic digital signature (EDS) of the message Y.

To verify the digital signature, the receiving party applies the inverse open transformation E to s and compares the resulting value h_1 = E(s) with the value h_2 = H(Y'), where Y' is the received message. In general, the received message Y' may differ from the transmitted message Y (for example, because of an attempt by a third party to distort the transmitted message).

If h_1 = h_2, then the digital signature is correct. By checking the digital signature, first, the source of information (the transmitting side) is authenticated, since only the transmitting side knows the secret of the transformation D(H(Y)); second, the integrity of the received message is confirmed (Y = Y').
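To make the construction concrete, here is a minimal sketch of the hash-and-sign flow in Python; the tiny RSA-style numbers (n = 3233, e = 17, d = 2753) and the messages are illustrative only:

```python
import hashlib

# Toy parameters for illustration: p = 61, q = 53, n = 3233, phi = 3120,
# e = 17, d = 2753 (17 * 2753 = 1 mod 3120).
n, e, d = 3233, 17, 2753

def H(message: bytes) -> int:
    # Digest reduced mod n so it fits the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # s = D(H(Y)): the closed transformation, known only to the sender.
    return pow(H(message), d, n)

def verify(message: bytes, s: int) -> bool:
    # h1 = E(s) is compared with h2 = H(Y').
    return pow(s, e, n) == H(message)

s = sign(b"transfer 100")
assert verify(b"transfer 100", s)       # h1 == h2: signature correct
print(verify(b"transfer 999", s))       # almost surely False: tampering detected
```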

From the EDS construction described above it follows that the hash function must satisfy two basic requirements: irreversibility and freedom from collisions. Let us explain these requirements.

The irreversibility of H(Y) means that for any value h from M it is difficult to find a Y such that H(Y) = h. This property is also called the one-wayness of the function H(Y). The need for this property follows from the fact that, were it not fulfilled, a fraudster located between the sender of the message Y and its recipient, having intercepted the message Y and its signature s, could find a Y' such that Y ≠ Y' and H(Y') = H(Y), so that s remains a valid signature for Y'. By passing the pair Y' and s to the recipient, the fraudster would thus alter the message Y in such a way that the recipient would not detect it. In this case, the fraudster does not need to know the secret key of the sender with which the EDS of the message was created.

Freedom from collisions means that it is difficult to find different values Y_1 and Y_2 such that the equality H(Y_1) = H(Y_2) holds.

The fulfillment of the second property is a means of countering insider attacks, in which the sender selects the message Y to be sent to the recipient in such a way that he can later produce a message Y' with a different meaning (Y ≠ Y') but with the same digest. In this case, the sender can always deny the authorship of the sent message Y, claiming that the message Y' was actually sent.

Obviously, the properties of irreversibility and freedom from collisions do not follow from each other. For example, the function H (Y) may be irreversible, but at the same time there may be algorithms that allow constructing collisions for this function. A good illustration of this can be the recent results that showed the possibility of constructing collisions for the well-known and widely used MD5 and SHA-1 hashing algorithms.

The vulnerability of hash functions consisting in violation of the freedom-from-collisions property was first demonstrated in August 2004 by cryptanalysts from China, using the example of the widely used (for example, in the SSL protocol) hash function MD5. At the same time, the possibility of applying the approaches used in the works of the Chinese specialists to the SHA-1 hash function, adopted by the US national standards institute as a tool for creating an EDS, was also pointed out.

However, the attacks described by Chinese scientists were viewed only as an academic experiment that was not applicable in practice. In 2005, German cryptographers Stefan Lucks from the University of Mannheim and Magnus Daum from the Ruhr University proved the error of this opinion, demonstrating in the course of an experiment the possibility of real substitution of documents certified by an EDS.

The situation that has arisen has seriously puzzled information security specialists. Perhaps in the short term we will witness the replacement of the MD5 and SHA-1 algorithms with other hash functions. Given the popularity of these algorithms, the scale and complexity of such a task are clear.

The necessity of the irreversibility and freedom-from-collisions properties imposes a restriction not only on the form of the function H(Y), but also on the cardinality of the set M: it must be large enough (in practice, values of H(Y) from 128 to 256 bits in length are used; the length of the hash value is application-specific).

The property of freedom from collisions results in the following poorly formalized properties of the transformation H (Y):

• messages Y are "uniformly" mapped by H(Y) into elements of the set M; that is, to each element of M there corresponds approximately the same number, |A|/|M|, of messages Y mapped to this element and belonging to the set A of binary sequences of limited but "large" length (here |A| and |M| are the cardinalities of the set A and of the set of values of the hash function, respectively);

• the function H(Y) should not be continuous; that is, the "proximity" of the values Y_1 and Y_2 does not imply the "proximity" of the values H(Y_1) and H(Y_2).

It is easy to show that from the uniformity of the mapping H(Y) it follows that the probability that for two randomly selected messages Y_1 and Y_2 the equality H(Y_1) = H(Y_2) is satisfied is equal to 1/|M|. In the case when the function H(Y) maps messages into the set of all binary sequences of length n, the probability of the event H(Y_1) = H(Y_2) is equal to 2^(-n). The length of the hash value used in practice is determined by the probability that the hash values of two randomly selected messages coincide. For example, when n = 160 this probability is approximately 0.68 · 10^(-48), which makes the event H(Y_1) = H(Y_2) practically impossible.

Consider now m < N = 2^n different messages for which the value of the hash function is calculated. Assuming that all hash-code values are equally probable, we can estimate the probability P(m) that all the hash codes corresponding to these messages are different.

It is easy to see that the probability P(m) is determined by the equality:

P(m) = ((N - 1)/N) · ((N - 2)/N) · ... · ((N - m + 1)/N).

From the inequality 1 - x < e^(-x) and the previous equality it follows:

P(m) < e^(-m(m-1)/2N).

In other words, the probability of a collision, 1 - P(m), becomes "tangible" when m(m-1)/2N ~ 1, that is, when m ~ 2^(n/2). Therefore, the larger n, the lower the probability of collision.
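This "birthday paradox" bound is easy to check numerically; a minimal Python sketch (the parameter n = 32 is chosen small so the effect is visible):

```python
import math

def collision_free_probability(m: int, n_bits: int) -> float:
    # P(m): probability that m digests drawn uniformly from N = 2^n values
    # are all distinct, computed from the exact product formula.
    N = 2 ** n_bits
    p = 1.0
    for l in range(1, m):
        p *= 1.0 - l / N
    return p

n_bits = 32
m = 2 ** (n_bits // 2)            # m ~ 2^(n/2)
exact = collision_free_probability(m, n_bits)
bound = math.exp(-m * (m - 1) / (2 * 2 ** n_bits))
print(exact, bound)               # both near e^-0.5 ~ 0.61: collisions are "tangible"
```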

Now let us consider how symmetric and asymmetric algorithms are used to solve the problem of ensuring the security of information exchange. As a rule, asymmetric encryption algorithms are used to solve the problems of authenticating the source of information and ensuring its integrity. These algorithms, by their nature, are designed to solve these problems in information exchange systems with a large number of users (the closed transformation is used for signing and the open transformation for signature verification).

To solve the problem of ensuring the confidentiality of transmitted information, symmetric algorithms are usually used. However, if the open and closed transformations in the asymmetric algorithm are defined on the same set of messages and are commutative, i.e., the equality E (D (Y)) = D (E (Y)) holds, then the asymmetric algorithm can also be used to encrypt messages. Indeed, the sender transforms message Y into message E (Y) using an open transform of the addressee of the message. Then the reverse transformation can be performed only by the recipient of the message, which means that the content of Y will be known only to the addressee, which is the solution to the problem of ensuring the confidentiality of information exchange.

It should be noted that some well-known asymmetric algorithms (for example, the most famous, the RSA algorithm) have the commutativity property. In practice, however, the "encryption" property of such asymmetric algorithms is used, as a rule, only so that the two parties of an information exchange can, at the beginning of a dialogue, exchange a symmetric encryption key, which is then used to encrypt the messages within the dialogue. This is because, at an equal degree of protection, symmetric algorithms work 2-4 orders of magnitude faster than asymmetric ones, and the speed of the algorithm (its computational complexity) is a key factor in many information exchange systems.

The combined use of symmetric and asymmetric encryption algorithms is widely used in the EMV standard.

Currently, cryptographic methods are the subject of national and international standardization. For example, Russia and the United States each have three national standards in the field of cryptography: an encryption algorithm standard, a digital signature standard, and a hash function standard.

Cryptographic information security tools defined by ISO standards are based on the same three types of algorithms, as well as algorithms for public key distribution and asymmetric encryption.

ISO standards generally do not define a specific encryption algorithm. The ISO 8732 and ISO 10116 standards regulate block cipher encryption modes. The ISO 10126-1 standard defines the general principles for encrypting messages in banking transactions, and the ISO 10126-2 standard defines the use of the DES algorithm in banking.

The ISO 14888-3 standard defines two broad classes of EDS schemes whose security is based on the assumed difficulty of the factorization problem and of the discrete logarithm problem in a finite commutative group. The first class includes, in particular, the RSA scheme; the second, the Schnorr scheme and the various variants of the El-Gamal scheme. These algorithms will be discussed in more detail below.

Thus, the ISO 14888 standard covers almost all EDS schemes actually in use. At the same time, the standard does not impose requirements on the parameters of the schemes. As a result, many of the schemes covered by the standard include algorithms whose EDS can be forged using relatively small computing resources.

The ISO standards for calculating the message hash function are more specific. In particular, the ISO 10118-3 and ISO 10118-4 standards define the SHA-1, RIPEMD-128, RIPEMD-160, MASH-1, and MASH-2 hash functions.

ISO standards are mainly advisory in nature on the choice of various cryptographic algorithms, in most cases without specifying specific requirements for their strength.

2. A quick overview of symmetric encryption algorithms

Let's start our review with the world's most popular symmetric encryption algorithm - DES (Data Encryption Standard).

DES was developed by IBM and adopted in 1977 by the National Institute of Standards and Technology as the US Government standard for encrypting information that is less than top secret. Since then, it was re-certified as such a standard every 5 years, up to and including 1993. In 1998, the US National Institute of Standards and Technology refused to re-certify DES, because the state of the art made it possible to crack DES with relatively cheap computing power.

DES is a so-called "block cipher" (the encrypted information is processed in blocks of fixed length; in the case of DES, the block length is 64 bits) and has a 56-bit key. The key is represented by a 64-bit binary sequence, obtained from the sequence of key bits by inserting an odd-parity check bit after every 7 key bits; thus, in the binary representation of the key, positions 8, 16, 24, ..., 64 hold odd-parity check bits.

The DES algorithm is based on numerous nonlinear transformations (permutations, substitutions, shifts, and S-transformations) performed on individual elements of the encrypted block. Such transformations can be described by a system of nonlinear equations whose solution is an NP-complete problem (no deterministic polynomial-time solution algorithm is known).

Let us describe the operation of the DES algorithm very schematically. First, the 64-bit block of encrypted information w is subjected to an initial fixed permutation: each bit of w is moved to a position given by a special table defined in the DES standard. The resulting block W is represented in the form W = L(0) || R(0), where L(0) and R(0) are the first (left) and last (right) 32 bits of the block W, and the sign || denotes the concatenation (joining) of blocks.

The DES algorithm is recursive. Having obtained, for some n (1 ≤ n ≤ 16), the values of the blocks L(n - 1), R(n - 1), we define the blocks L(n), R(n) using the following equalities:

L(n) = R(n - 1),
R(n) = L(n - 1) ⊕ f(R(n - 1), K(n)),   (B1)

where ⊕ denotes bitwise addition modulo 2, the function f is defined below, and K(n) is a 48-bit sequence obtained from the DES key using a fixed set of permutations, shifts, and substitutions defined in the DES standard. The cryptotext of the encrypted block w is the block L(16) || R(16).

It is obvious from (B1) that decryption of the cryptotext is carried out using the following set of equalities:

R(n - 1) = L(n),
L(n - 1) = R(n) ⊕ f(L(n), K(n)),

for 1 ≤ n ≤ 16. After the values of the blocks L(0), R(0) have been calculated using these equalities, decryption of the initial block is completed.
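The recursion (B1) and its inversion are easy to see in code. Below is a toy Feistel network in Python; the round function f and the key schedule are simplistic stand-ins, not the DES transformations, so the sketch shows only the structure:

```python
# A toy 16-round Feistel network illustrating recursion (B1) and its inversion.
MASK32 = 0xFFFFFFFF

def f(r: int, k: int) -> int:
    # Stand-in for the DES f-function (expansion, S-boxes, permutation).
    return ((r * 0x9E3779B9) ^ k) & MASK32

def subkeys(key: int) -> list[int]:
    # Stand-in for the DES key schedule: 16 subkeys K(1)..K(16).
    return [(key >> n) ^ (key * (n + 1)) & MASK32 for n in range(16)]

def encrypt(block: int, key: int) -> int:
    L, R = (block >> 32) & MASK32, block & MASK32      # W = L(0) || R(0)
    for k in subkeys(key):
        L, R = R, L ^ f(R, k)                          # equalities (B1)
    return (L << 32) | R                               # L(16) || R(16)

def decrypt(block: int, key: int) -> int:
    L, R = (block >> 32) & MASK32, block & MASK32
    for k in reversed(subkeys(key)):
        L, R = R ^ f(L, k), L                          # inverted equalities
    return (L << 32) | R

c = encrypt(0x0123456789ABCDEF, 0xDEADBEEF)
assert decrypt(c, 0xDEADBEEF) == 0x0123456789ABCDEF
```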

The function f(x, y), where x is a 32-bit binary variable and y is a 48-bit variable, takes values in the set of all possible 32-bit sequences and is constructed as follows. The variable x is "expanded" to a 48-bit variable x_1 using the following DES table of 6 columns and 8 rows (the numbers at the intersections of the rows and columns of the table are called table entries; each entry takes a value from 1 to 32):

32  1  2  3  4  5
 4  5  6  7  8  9
 8  9 10 11 12 13
12 13 14 15 16 17
16 17 18 19 20 21
20 21 22 23 24 25
24 25 26 27 28 29
28 29 30 31 32  1

"Expansion" of the variable x is done as follows. All bits of the variable x are numbered from 1 to 32. In the above table, instead of each of its elements, the value of the bit with the number equal to the value of this element is put. The result is a table of 6 columns and 8 rows, whose elements take on the values 0 or 1. Let us now represent the resulting table as a string of 48 bits in length. To do this, take the first row of the table, storing the sequence of elements in it, followed by the second row, and so on up to and including the eighth row of the table. The result is a 48-bit string representing the variable x P

Next, the variable x_1 is added bitwise modulo 2 to the variable y. The resulting 48-bit block B is divided into eight 6-bit blocks: B = B_1 || B_2 || B_3 || B_4 || B_5 || B_6 || B_7 || B_8. In turn, each of these blocks is converted into a 4-bit block by the special nonlinear transformations S_1, ..., S_8. Each S-transformation is specified by a DES-defined table of 4 rows and 16 columns. The elements of the table are decimal integers ranging from 0 to 15.

Consider as an example the table for the transformation S_7. According to DES, the table looks like this:

 4 11  2 14 15  0  8 13  3 12  9  7  5 10  6  1
13  0 11  7  4  9  1 10 14  3  5 12  2 15  8  6
 1  4 11 13 12  3  7 14 10 15  6  8  0  5  9  2
 6 11 13  8  1  4 10  7  9  5  0 15 14  2  3 12


Table rows are numbered from top to bottom from 0 to 3, and columns are numbered from left to right from 0 to 15.

Then the transformation S_7, which maps the block B_7 (a sequence of six bits Z_1, Z_2, ..., Z_6) to A_7, is constructed as follows. Two numbers 0 ≤ ST ≤ 3 and 0 ≤ CL ≤ 15 are determined, whose binary representations are respectively ST = (Z_1 Z_6) and CL = (Z_2 Z_3 Z_4 Z_5). Next, the element located at the intersection of row ST and column CL is selected from the table. Recall that the elements of the table are numbers from 0 to 15; therefore, 4 bits are sufficient for the binary representation of any element of the table. The binary representation of the selected element of the table is the 4-bit sequence A_7.
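A sketch of this lookup in Python, using the S_7 table reconstructed above:

```python
# S7 lookup: row ST from bits Z1 Z6, column CL from bits Z2 Z3 Z4 Z5.
S7 = [
    [ 4, 11,  2, 14, 15,  0,  8, 13,  3, 12,  9,  7,  5, 10,  6,  1],
    [13,  0, 11,  7,  4,  9,  1, 10, 14,  3,  5, 12,  2, 15,  8,  6],
    [ 1,  4, 11, 13, 12,  3,  7, 14, 10, 15,  6,  8,  0,  5,  9,  2],
    [ 6, 11, 13,  8,  1,  4, 10,  7,  9,  5,  0, 15, 14,  2,  3, 12],
]

def s7_lookup(bits: str) -> str:
    # bits = Z1 Z2 Z3 Z4 Z5 Z6
    assert len(bits) == 6
    st = int(bits[0] + bits[5], 2)       # row ST = (Z1 Z6)
    cl = int(bits[1:5], 2)               # column CL = (Z2 Z3 Z4 Z5)
    return format(S7[st][cl], "04b")     # 4-bit output block A7

print(s7_lookup("011011"))  # ST = 01 = 1, CL = 1101 = 13 -> S7[1][13] = 15 -> '1111'
```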

The definition of the function f is completed by applying the permutation defined in the DES standard to the 32-bit block A_1 || A_2 || A_3 || A_4 || A_5 || A_6 || A_7 || A_8.

DES has a number of interesting properties. The first property concerning symmetry is almost obvious and consists in the fact that if all 0s in the encrypted block and the DES key are replaced by 1 and vice versa, then the result of encryption will be a block that coincides with the original cryptotext, all the bits of which have undergone the inversion procedure. Indeed, DES uses only the operations of permutation, substitution, shift and addition modulo 2, which do not depend on how the digits 0 and 1 are "called".

The second property is called the avalanche effect and is highly desirable from the point of view of secrecy: a slight change in the original message or key leads to significant changes in the cryptotext.

DES was first published in 1973, and since then so many articles and book chapters on cryptography have been written about it all over the world that it would seem it should have been broken long ago. However, for a long time not only was this cipher not broken, but, in fact, even the assessment of its cryptographic strength did not decrease.

Several attacks on DES are known today. The first, universal method consists in a complete enumeration of all possible key variants, checking each for correct decryption until the true value is obtained. In the case of DES, it is necessary to enumerate 2^56 (or approximately 7.2 × 10^16) possible key variants.

Of course, the progress of computer technology in recent years has been so significant that enumerating all possible DES key variants is a solvable problem for the foreseeable future. Two successful attacks on DES are known that were carried out back in 1999 using computers connected over the Internet (an open project). In the first case, the key was compromised in about 3 months, and 85% of all possible key values were analyzed to find it. In the second case, the key was recovered in 6 weeks, which required iterating over about 25% of all key values. In addition, there is the known case when a computer built with funds from the Electronic Frontier Foundation, consisting of 1,728 processors capable of enumerating 88 billion key variants per second, cracked DES in 56 hours of operation.

As a result, DES is not considered reliable today, and Triple DES (another designation is 3DES) is proposed as the simplest alternative, using a 112-bit key (double-key) or 168-bit (triple-key).

Another attack on DES is called differential cryptanalysis. It reduces the number of keys to be enumerated, but in the general case it requires 2^47 cryptotexts of chosen plaintext blocks. In practice, the method of differential cryptanalysis turned out to be difficult to implement because of the overly complex requirements for the plaintext blocks chosen for encryption.

Differential cryptanalysis ideas have found application in attacks on microprocessor cards called Differential Fault Attack. In these attacks, it is possible to introduce changes (errors) in the results of calculations performed on a certain cycle of the DES algorithm. In this case, the method of differential cryptanalysis makes it possible to reduce the enumeration of possible DES key values to quite a modest value that is easily implemented in practice. For more information about using the method of differential cryptanalysis in attacks on a microprocessor card, see section 2.8.

Another attack, known as linear cryptanalysis, recovers the DES key by analyzing 2^43 plaintexts. Experimental linear cryptanalysis of the DES algorithm was first successfully implemented on 12 HP 9735 workstations and took 50 days.

An important advantage of DES is its high performance. DES is about 100 times faster than the RSA algorithm (see below) of the same cryptographic strength (for which the RSA key length should be approximately 384 bits) when both cryptoalgorithms are implemented in software, and 1,000-10,000 times faster when the algorithms are implemented in specialized computing devices called Hardware Security Modules.

Even a software implementation of DES on a personal computer can encrypt data at a speed of about 1 Mbps. DES implementation on microprocessor cards using a special coprocessor takes only 2-8 microseconds.

As noted, in 1998 the US National Institute of Standards and Technology refused to re-certify DES as a US Government standard. After several years of discussion, on October 2, 2000, it approved a new standard, the Advanced Encryption Standard (AES), a block symmetric encryption algorithm to replace DES. The AES standard is based on the Rijndael block cipher algorithm.

The new standard has a good chance of becoming international, if not de jure, then at least de facto. First, it is based on an algorithm adopted through an open competition, in which algorithms proposed by mathematicians from many countries of the world participated. Second, the winning algorithm was developed by the Belgian cryptographers Vincent Rijmen and Joan Daemen (the algorithm is called Rijndael after its authors' surnames; in Flemish transcription it is pronounced approximately like "Randal"). Rijndael's Belgian, not American, origin should help AES gain acceptance in Europe, which has long been suspicious of DES.

The Rijndael algorithm has been subjected to meticulous analysis by the National Institute of Standards and Technology and the US National Security Agency, as well as numerous other laboratories. However, no one was able to identify vulnerabilities in it.

The Rijndael algorithm can work with keys 128, 192, and 256 bits long, and is therefore protected from brute-force attacks for the foreseeable future. In addition, the algorithm combines high performance with moderate memory requirements for its implementation. Therefore, it can be implemented in a wide variety of devices, including mobile phone SIM cards and bank smart cards. And finally, the Rijndael algorithm is not protected by patents and is available for free use in any product and system.

In recent years, the symmetric algorithm Triple DES has become widespread as a replacement for DES. This algorithm typically uses a two-key DES key and has three steps.

At the first step, the 64-bit block is encrypted using the first key and the DES algorithm. At the second step, using the second key and in accordance with the DES algorithm, the cryptotext obtained at the first step is decrypted. In the last third step, the decryption result obtained in the second step is again encrypted using the first key and the DES algorithm. The resulting 64-bit block is Triple DES cryptotext. Thus, the Triple DES algorithm requires the use of DES three times, which is where its name comes from.
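The E-D-E composition can be sketched as follows; des_e and des_d below are invertible toy placeholders, not real DES, so the sketch only demonstrates the structure and its compatibility property:

```python
# Sketch of the two-key EDE composition with toy stand-ins for DES.
M = 2 ** 64
C = 0x9E3779B97F4A7C15                  # odd, hence invertible mod 2^64
C_INV = pow(C, -1, M)

def des_e(block: int, key: int) -> int:
    return ((block ^ key) * C) % M      # placeholder "encryption"

def des_d(block: int, key: int) -> int:
    return ((block * C_INV) % M) ^ key  # exact inverse of des_e

def tdes_encrypt(block: int, k1: int, k2: int) -> int:
    # Step 1: encrypt with k1; step 2: decrypt with k2; step 3: encrypt with k1.
    return des_e(des_d(des_e(block, k1), k2), k1)

def tdes_decrypt(block: int, k1: int, k2: int) -> int:
    return des_d(des_e(des_d(block, k1), k2), k1)

c = tdes_encrypt(0x0123456789ABCDEF, 111, 222)
assert tdes_decrypt(c, 111, 222) == 0x0123456789ABCDEF
# With k1 == k2 the scheme degenerates to a single encryption, which is what
# lets Triple DES hardware interoperate with plain DES.
assert tdes_encrypt(42, 7, 7) == des_e(42, 7)
```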

Triple DES with a double key has become the de facto standard in the banking industry. Payment systems recommend it as a symmetric encryption algorithm for encrypting PIN blocks and other sensitive transaction data. The attractiveness of the algorithm stems from the fact that the investment of banks and banking solution providers in support of Triple DES was minimal, since the algorithm is based on the use of DES, which they already supported.

Obviously, a brute-force attack on Triple DES with a 112-bit key will need to check 2^112 (or approximately 5.2 × 10^33) different key values.

In many known implementations, the encryption speed of the Triple DES algorithm is approximately 1.5-2.5 times lower than the performance of DES.

Following the example of the United States in developing an open national encryption standard, in 1989 the state standard for data encryption for computer networks was adopted in the USSR. It received the designation GOST 28147-89 and was stamped "For official use" until the end of the existence of the USSR itself. In Russia, it has been officially adopted since 1992 as a data encryption standard along with other former USSR standards. The standard was formally declared fully "open" in May 1994.

The GOST 28147-89 standard, like DES, is a block cipher. The length of the information block is also 64 bits. The key length is equal to 256 bits, and there can be no question of any practical possibility of an attack on the GOST 28147-89 algorithm, associated with enumerating all valid key variants, not only today, but in the coming decades.

The encryption speed for the hardware implementation of GOST 28147-89 is 1.1 MB / s, and can be increased to 7-8 MB / s. Comparison of the performance of the implementation on Pentium processors shows that the speed of the GOST 28147-89 encryption algorithm is approximately 2-3.5 times lower than the encryption speed of the Rijndael algorithm.

Around the same time (1989), an alternative to the DES algorithm was developed in Japan and published as a draft open national data encryption standard, designated FEAL. It is also a block cipher, using a 64-bit data block and a 64-bit key. However, neither it nor any other algorithm has yet been adopted as the national encryption standard of Japan.

In 1990, X. Lai and J. Massey (Switzerland) proposed a draft international data encryption standard, designated IDEA (International Data Encryption Algorithm). It is highly regarded in the international cryptographic community and in recent years, by the efforts of international standardization organizations (primarily European ones), has been nominated for the role of the official pan-European data encryption standard. The IDEA key is 128 bits long and encrypts a 64-bit block. As will become clear below, the algorithm will remain tamper-resistant for the next several decades.

The IDEA algorithm uses three groups of operations: bitwise addition mod 2, addition mod 2^16, and multiplication mod 2^16 + 1. The operations are performed on blocks 16 bits long, obtained by dividing the encrypted block into 4 subblocks. The algorithm is cyclic: 8 transformation rounds are used.

Today the IDEA algorithm is patented in the USA and most European countries. The patent holder is Ascom-Tech. The non-commercial application of the standard is free of charge.

Some cryptoalgorithms (in particular, DES and IDEA), when encrypting with certain keys, cannot provide the required level of cryptographic strength. Such keys are called weak keys. There are 4 weak and 12 semi-weak keys known for DES. Although the probability of hitting them is only about 2 · 10^(-16), in serious cryptographic systems this cannot be neglected.

The number of weak IDEA keys is 2^51. However, due to the fact that the total number of IDEA keys is 2^128, the probability of hitting a weak key is about 3 × 10^7 times less than in the case of DES.

IDEA is faster than Triple DES but slower than DES. The encryption speed of the algorithm in the case of its software implementation on a Pentium-200 computer is about 15 Mbit / s.

The RC2 and RC4 algorithms are variable-key-length ciphers developed by Ron Rivest for very fast encryption of large amounts of information. These two algorithms are faster than DES, and security can be increased by choosing a longer key. RC2 is a block algorithm and can be used as an alternative to DES. The RC4 algorithm implements a stream cipher and is nearly ten times faster than DES. With 128-bit keys, the RC2 and RC4 algorithms provide the same level of security as IDEA. The RC5 and RC6 algorithms, which are developments of RC2 and RC4, appeared relatively recently. The RC6 algorithm was even nominated for the new Advanced Encryption Standard; in the nominated version, the RC6 block size was fixed at 128 bits and the number of rounds at 20.

The results of a comparative analysis of the algorithms discussed above are shown in Table B1.

Table B1. Results of a comparative analysis of symmetric encryption algorithms

Algorithm        Crypto resistance   Performance (Pentium-200), Mbps   Key length (bits)
DES              low                 30                                56
3DES             good                12                                112
IDEA             good                15                                128
GOST 28147-89    high                8                                 256

3. A brief overview of asymmetric encryption algorithms

The asymmetric encryption algorithms used in practice are mostly based on the complexity of solving one of the following mathematical problems:

• the problem of factorization (decomposition into factors) of a large number: multiplying two large numbers is a problem of polynomial complexity (in the size of the larger factor). The complexity of solving the inverse problem, decomposing a large number into factors, is determined by the size of the number being factored, and for numbers of the magnitude used in practice, factorization is computationally extremely laborious. The RSA algorithm is based on the computational complexity of solving the factorization problem;
  • the problem of finding the discrete logarithm. From the point of view of computational complexity, it is quite easy to perform the exponentiation operation in a finite field, but to solve the inverse problem - finding the discrete logarithm - it will take almost a complete enumeration of all elements of the field. The DSA, EGSA, Diffie-Hellman algorithms are based on the complexity of solving the logarithm problem in a finite field;
  • decoding problems for some error-correcting codes. It is quite easy to obtain a codeword (multiply matrices), but finding the base word by the codeword is a computationally laborious task. This method is rarely used in practice (the McEliece cryptosystem using Goppa codes is known).
The first and most famous digital signature system in the world was the RSA system, the mathematical scheme of which was developed in 1977 at the Massachusetts Institute of Technology (USA) and patented in the USA in 1982. It is named after the first letters of the authors' surnames: R. Rivest, A. Shamir, L. Adleman.

In the most general terms, the RSA system is as follows (it is recommended to familiarize yourself with the contents of Appendix A before reading the rest of this appendix, which provides in an accessible form and at the same time with sufficient completeness the mathematical foundations necessary to understand asymmetric encryption algorithms). Let p and q be two different large primes. We denote n = pq. Then the Euler function for n = pq is equal to φ (n) = (p - 1) • (q - 1).

Choose a large integer d coprime to (p - 1) · (q - 1) and define 1 < e < φ(n) satisfying the congruence:

e · d ≡ 1 (mod φ(n)).

The number e is by definition the inverse of d modulo φ(n) and, as explained earlier, is found using Euclid's algorithm. This requires O((log n)^3) operations.

The numbers n, e, and d are called, respectively, the modulus, the encryption exponent, and the decryption exponent. The numbers n and e form the public key, while the remaining numbers p, q, φ(n), and d are secret. In practice, only the decryption exponent d should be kept as the secret key, while the numbers p, q, and φ(n) can be discarded. For microprocessor cards this remark is also important from the point of view of saving card memory.

In what follows, we will use the notation y ≡ x (mod n) if x is the residue of the number y modulo n (see representation (A3) in Appendix A).

The procedure for encrypting the original text x (0 ≤ x < n) in the RSA scheme is determined by the equality

y = E(x) = x^e (mod n),

and the decryption procedure by the equality

D(y) = y^d (mod n).
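A toy round trip in Python (textbook-sized parameters; real moduli are 1024 bits and more):

```python
# Toy RSA key generation, encryption, and decryption (illustration only).
p, q = 47, 59
n = p * q                      # modulus n = 2773
phi = (p - 1) * (q - 1)        # phi(n) = 2668
d = 157                        # chosen coprime to phi(n)
e = pow(d, -1, phi)            # encryption exponent: e*d = 1 (mod phi(n))

x = 1001                       # original text, 0 <= x < n
y = pow(x, e, n)               # y = E(x) = x^e (mod n)
assert pow(y, d, n) == x       # D(y) = y^d (mod n) recovers x
```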

Let us prove that, indeed, D(E(x)) = D(y) = x. Since e · d = C · φ(n) + 1 for some integer C, the equality

x^(e·d) = x^(C·φ(n)+1)

holds. Then from the earlier proved Euler's theorem, under the assumption (x, n) = 1, the equality follows:

x^(C·φ(n)+1) ≡ x (mod n),   (B2)

Q.E.D.
The condition (x, n) = 1 is not restrictive for the fulfillment of (B2). Indeed, if n | x, then (B2) is obvious. If only one of the primes p or q divides x, for example p | x, then x = pt and (t, q) = 1. Then it follows from Fermat's little theorem:

(pt)^(q-1) ≡ 1 (mod q) and (pt)^((p-1)(q-1)) ≡ 1 (mod q).

From the last congruence we find that q | x^φ(n) - 1, and multiplying by x = pt gives pq | x^(φ(n)+1) - x, that is:

x^(φ(n)+1) ≡ x (mod pq).

Thus, equality (B2) has been proved for any values of x.
To find the two large primes p and q used in the RSA scheme, an odd integer r of a suitable size (say, 200 decimal digits) is chosen arbitrarily and checked for primality using the Solovay-Strassen tests described below. If r does not pass the primality test, the number r + 2 is subjected to the verification procedure, and so on.
According to Chebyshev's theorem on the distribution function of the number of primes, there are approximately 10^200/ln(10^200) - 10^199/ln(10^199) 200-digit primes (here ln z denotes the natural logarithm of the number z). Comparing this number with the number (10^200 - 10^199)/2 of all 200-digit odd numbers, we get that the probability of success in one particular test is approximately 0.00434.

As follows from the description of the RSA scheme, forward and backward transformations are commutative, which means that the RSA algorithm can be used both to sign data and to ensure its confidentiality. In the first case, the decryption exponent is used to sign the data, in the second case, the encryption exponent.
The commutativity property of the RSA scheme is widely used in practice, including in the EMV standard. In EMV, the card's public encryption key is used by the terminal to encrypt the PIN block when it is transferred to the card for cardholder verification by the Offline Enciphered PIN method.
EMVCo, the company that owns the rights to the EMV standard and is engaged in its development, expects to find an alternative to the RSA standard by 2025-2030. By that time, ensuring the required cryptographic strength will demand long RSA key moduli, which will negatively affect the speed of transactions. One of the main problems in choosing an alternative asymmetric encryption algorithm is the requirement that such an algorithm be commutative.
However, it should be noted that the RSA algorithm is still primarily used in digital signature schemes. Due to the low encryption speed (about 30 Kbps with a 512-bit key on a 2 GHz processor), much faster symmetric encryption algorithms with a random session key are usually used to encrypt messages. The RSA algorithm is only used to encrypt this session key.
The RSA algorithm uses the secret exponent d to decrypt data. Let the binary representation of d have the form:

d = d_1·2^k + d_2·2^(k-1) + ... + d_(k+1),

where d_1 = 1, d_i = 0 or 1 for i = 2, ..., k+1, k = [log d], and [x] is the integer part of the number x. Let us show that calculating the power C^d (mod n) requires O(log d · (log n)^2) elementary operations or, taking into account the closeness of the values of d and n in practice, O((log n)^3) elementary operations.
We first show that calculating the power C^d (mod n) requires O(log d) multiplications of two numbers modulo n. This is achieved by sequential squaring instead of the conventional recurrent multiplication of C by C for i = 1, ..., d - 1. The sequential squaring algorithm can be described as follows.
Let s_1 = 1. For i = 1, ..., k+1, we perform two steps:
1) if d_i = 1, then R_i = s_i · C (mod n); if d_i = 0, then R_i = s_i;
2) s_(i+1) = R_i^2 (mod n).

Then it is easy to see that C^d (mod n) = R_(k+1). Indeed, C^d is the product over i = 1, ..., k+1 of the factors C^(d_i · 2^(k+1-i)). If d_i = 0 for some i, the corresponding factor equals 1. If d_i = 1, then the factor C enters at the i-th step of the algorithm and, as follows from the description of the sequential squaring algorithm, by the end of the algorithm it has been squared k + 1 - i times, that is, raised to the power 2^(k+1-i).
It is also clear that multiplying two numbers modulo n requires O((log n)^2) elementary operations, which implies that O(log d · (log n)^2) operations are required to calculate the power C^d (mod n).
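The sequential squaring algorithm in Python, mirroring the steps above:

```python
def power_mod(C: int, d: int, n: int) -> int:
    # Scan the bits d_1 ... d_(k+1) of d from the most significant one,
    # multiplying by C when the bit is 1 and squaring at every step.
    s, R = 1, 1
    for bit in bin(d)[2:]:                      # binary representation of d
        R = (s * C) % n if bit == "1" else s    # step 1
        s = (R * R) % n                         # step 2: s_(i+1) = R_i^2 (mod n)
    return R                                    # R_(k+1) = C^d (mod n)

assert power_mod(7, 560, 561) == pow(7, 560, 561)
```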
Usually, in practice, the encryption exponent e is chosen small (sometimes the same encryption exponent is used by entire groups of users). For example, in the EMV standard, the value of e can take only two values: 3 and 2^16 + 1.
This choice of the open exponent makes the encryption procedure much faster than the decryption procedure. As just shown, the decryption procedure requires O((log n)^3) operations, while encryption requires only O((log n)^2) operations.

It is also important to obtain an estimate of the computational complexity of generating the secret and public keys of the RSA algorithm. To do this, we first estimate the average number of steps required to find two large primes p and q such that p ≈ q ≈ √n. To find a prime number, the Solovay-Strassen tests are used, which are based on the quadratic-residue theorem presented in Appendix A. To check a number m for primality, an odd number r < m is chosen at random such that (r, m) = 1. If the last equality is not satisfied, it immediately follows that the number m is composite, and with this the primality check of the given m ends.
After the number r is chosen, the value of the Jacobi symbol (r/m) is calculated using the reciprocity law (see Appendix A). One can show that the computational complexity of this calculation is the same as that of Euclid's algorithm, O((log n)^3). Next, in accordance with the quadratic-residue theorem, the number r^((m-1)/2) (mod m) is calculated and it is checked that it is congruent to the Jacobi symbol (r/m) modulo m. This also requires O((log n)^3) elementary operations.

After the verification of the necessary condition for the primality of m with the help of a randomly chosen number r is completed, another random number is selected, with the help of which the necessary condition for the primality of m is checked again. The verification of the necessary condition is performed l times. Moreover, the probability P that after the checks carried out the number m is nevertheless composite satisfies the inequality P < 2^(-l). This follows from the result proved in Appendix A that if m is a composite number, then fewer than half of the numbers coprime to m satisfy the condition of the quadratic-residue theorem. In practice, the value of l is selected from considerations of making P sufficiently small.
Since, according to Chebyshev's theorem on the distribution function of the number of primes, the density of primes in the vicinity of the number m is (ln m)^(-1), we need to try O(log m) numbers m, for each of which O((log m)^3) operations are required to check primality. It follows that the total complexity of the prime search is O((log m)^4) elementary operations. Given that the sought number m ≈ √n, as well as the need to choose two primes p and q, the total complexity of the key generation procedure for the RSA algorithm is O((log n)^4) elementary operations.
There are various ways to speed up the calculation of the power C^d (mod n) in the RSA algorithm. Widely used is the Chinese Remainder Theorem (CRT), proved earlier in Appendix A.
Let c_p and c_q be the remainders of dividing C by p and q, respectively, and d_p and d_q the remainders of dividing d by p - 1 and q - 1, respectively. In other words, the following equalities take place:

c_p = C (mod p);
c_q = C (mod q);
d_p = d mod (p - 1);
d_q = d mod (q - 1).
Then, obviously, the following congruences hold:

C^d ≡ c_p^(d_p) (mod p);
C^d ≡ c_q^(d_q) (mod q).   (B3)
From these congruences, using the Chinese Remainder Theorem, we find that

C^d ≡ c_p^(d_p) + p · ((c_q^(d_q) - c_p^(d_p)) · p_q^(-1) mod q) (mod pq),   (B4)

where p_q^(-1) satisfies the relation p_q^(-1) · p ≡ 1 (mod q). Note that the number p_q^(-1) is calculated once and is then used for any values of C.
To calculate the right-hand sides of the congruences (B3), as noted above, O((log √n)^3) = (1/8)·O((log n)^3) operations are required for each. Thus, using the Chinese Remainder Theorem, calculating the power C^d (mod n) requires about four times fewer operations than in the case of direct exponentiation.
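A sketch of CRT-based decryption with the toy parameters used earlier:

```python
# RSA decryption via CRT, congruences (B3) and recombination (B4).
p, q = 47, 59
n, d = p * q, 157
p_inv_q = pow(p, -1, q)             # p^-1 (mod q), computed once

def decrypt_crt(C: int) -> int:
    dp, dq = d % (p - 1), d % (q - 1)
    cp = pow(C % p, dp, p)          # C^d (mod p), congruences (B3)
    cq = pow(C % q, dq, q)
    # Recombination (B4): C^d = cp + p * ((cq - cp) * p^-1 mod q) (mod pq)
    return (cp + p * (((cq - cp) * p_inv_q) % q)) % n

C = 2001
assert decrypt_crt(C) == pow(C, d, n)
```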
At the same time, the use of CRT when implementing the RSA algorithm in microprocessor cards without appropriate chip protection gives fraudsters more opportunities to disclose the numbers p and q with which calculations are made (see section 2.8).

There are two approaches to attacking the RSA algorithm. The first is to try to decompose the number n into prime factors (the factorization problem). The oldest and slowest decomposition method, the sieve of Eratosthenes, guarantees a solution of the problem in about √n checks. Asymptotically, the fastest known factorization algorithms require a running time of the order of O(e^(a·√(ln n · ln ln n))), where the constant a = 1 + ε for an arbitrarily small ε.

Another approach to attacking RSA is to find the e-th root modulo n of an encrypted number. Today it is not known whether this problem can be reduced to the factorization problem, and no method of attacking RSA based on its use is known either.

In recent years, specialists in computational number theory, using very sophisticated methods and powerful computing systems (the fastest supercomputers such as CRAY-3, or distributed networks of several hundred VAX stations), have been able to factorize integers of 150 decimal digits in a fairly short time (on the order of several months).
For the current level of development of computing technology, it is believed that RSA cryptosystems with a modulus of less than 512 bits are unreliable, with a modulus of 784 bits - openable by government services, with a modulus of 1024 bits - reliable enough for the next 5-10 years, 2048 bits - reliable for the next decades. In the banking sector, modules with a length of at least 1024 bits are used today.

It should be noted that the value φ(n) in the RSA algorithm is classified as secret data, since knowledge of φ(n) allows one to solve the factorization problem for n. Indeed, there is a system of equations:

p + q = n - φ(n) + 1,
p - q = √((p + q)^2 - 4n),

which has a unique solution for p and q.
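This recovery is immediate in code (toy numbers from the earlier example):

```python
import math

# Recovering p and q from n and phi(n) via the system above.
n, phi = 2773, 2668                # n = 47 * 59 from the earlier example
s = n - phi + 1                    # s = p + q
t = math.isqrt(s * s - 4 * n)      # t = p - q = sqrt((p + q)^2 - 4n)
p, q = (s + t) // 2, (s - t) // 2
assert p * q == n                  # factorization recovered: 59 * 47
```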
Notice the following weakness of the RSA scheme. If two participants A and B of a cryptosystem use the RSA algorithm with a common modulus n, then each of these participants can learn the secrets of the other. Indeed, party B knows e_B, d_B, e_A (the open exponents of all participants of the cryptosystem are known) and n. We show how it can calculate d_A.

Denote g_0 = e_B · d_B - 1, h_0 = (g_0, e_A), g_1 = g_0 / h_0. Obviously, the equality (g_1, e_A) = 1 holds. Using Euclid's algorithm, we find integers a and b such that a·g_1 + b·e_A = 1.

We show that φ(n) | g_1. Indeed, by definition e_B · d_B - 1 = K·φ(n) for some integer K. Hence g_1 = (e_B · d_B - 1)/h_0 = K·φ(n)/h_0. Since by definition (e_A, φ(n)) = 1 and h_0 = (g_0, e_A), we obtain (h_0, φ(n)) = 1. The latter means that h_0 | K, and therefore φ(n) | g_1.

Therefore, from the equality a·g_1 + b·e_A = 1 it follows that b·e_A ≡ 1 (mod φ(n)). Thus, the number b modulo φ(n) is equal to d_A. Finding d_A requires O((log n)^3) operations.
One can also prove that efficient factorization algorithms exist when the value of the closed exponent (the decryption exponent) is known.
The RSA algorithm is de facto an international digital signature standard. Practical use of the RSA system has the following disadvantages:
  • the RSA method is protected by US patent #4,405,829, and its use in commercial products requires a license agreement with the patent holder, RSA Data Security Corporation (USA);
  • to ensure high cryptographic strength of the signature, it is necessary to use integers whose binary representation is 1024 bits or more, which requires rather large computational costs (in the case of microprocessor cards, separate cryptographic coprocessors are used, and the memory costs of the card are also noticeable);
  • when generating keys in the RSA system, it is necessary to check a large number of conditions, failure to fulfill any of which makes falsification of the signature possible;
  • the RSA method allows one, without knowing the secret keys, to obtain, on the basis of a set of documents at hand and the digital signatures to them, signatures for documents created on the basis of the existing ones.
Another common asymmetric encryption algorithm, EGSA, was developed in 1984 by the American T. El-Gamal. This algorithm is not protected by a patent, which was one of the arguments used in August 1991 by the US National Institute of Standards and Technology before the US Congress commission to justify choosing the EGSA algorithm as the basis of the national standard DSA (Digital Signature Algorithm).
The EGSA signature algorithm works as follows. We consider the field GF (p), where p is some sufficiently large prime number. Let G (q) be a subgroup of the multiplicative group of the field GF (p) of prime order q, g a generator element of the subgroup G (q). Recall that G (q) is a cyclic group (as a subgroup of a cyclic group or because the order of the group is simple).
The secret key for creating the signature is some integer 0 < x < q. The public key for signature verification is the residue y = g^x (mod p).

To create a signature for a message m, the following actions are performed:
1. The hash function h(m) of the message being signed is calculated.
2. An integer random number k is generated such that 0 < k < q.
3. The residue r = g^k (mod p) is calculated.
4. The number s is found as a solution of the equation h(m) ≡ x·r + s·k (mod q).
Such a solution always exists due to the fact that (k, q) = 1. The signature for the message m is the pair (r, s).

To check the validity of the signature (r, s) for the received message m_1, the following actions are performed:
1. The inequality r < p is checked. If it fails, the signature is incorrect.
2. The congruence g^(h(m_1)) ≡ y^r · r^s (mod p) is checked. If this congruence holds, then the signature is correct; otherwise it is incorrect.
The necessity and sufficiency of the last congruence for confirming the authenticity of the signature are obvious. Indeed, y^r · r^s = g^(xr + sk + Cq) for some integer C. Since g is a generator element of a subgroup of order q, g^q ≡ 1 (mod p), and therefore the congruence g^(h(m_1)) ≡ y^r · r^s (mod p) holds if and only if h(m_1) = h(m), that is (for a collision-free hash function), if and only if m = m_1.
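A toy implementation of the scheme in Python; the parameters p = 2027, q = 1013 and the messages are illustrative (note that with such a small q, a forged message slips through with probability about 1/q):

```python
import hashlib, random

# Toy EGSA parameters: q = 1013 divides p - 1 = 2 * 1013;
# g = 2^((p-1)/q) mod p generates the subgroup G(q).
p, q = 2027, 1013
g = pow(2, (p - 1) // q, p)

def h(m: bytes) -> int:
    # Digest reduced mod q to fit the toy parameters.
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

x = random.randrange(1, q)         # secret key, 0 < x < q
y = pow(g, x, p)                   # public key y = g^x (mod p)

def sign(m: bytes) -> tuple[int, int]:
    k = random.randrange(1, q)     # fresh random k, 0 < k < q
    r = pow(g, k, p)               # r = g^k (mod p)
    # Solve h(m) = x*r + s*k (mod q) for s; k is invertible since q is prime.
    s = ((h(m) - x * r) * pow(k, -1, q)) % q
    return r, s

def verify(m: bytes, r: int, s: int) -> bool:
    # Check r < p and the congruence g^h(m) = y^r * r^s (mod p).
    return 0 < r < p and pow(g, h(m), p) == (pow(y, r, p) * pow(r, s, p)) % p

r, s = sign(b"debit 25 EUR")
assert verify(b"debit 25 EUR", r, s)
print(verify(b"debit 2500 EUR", r, s))  # almost surely False for a forgery
```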

Compared to the RSA method, the EGSA algorithm has a number of advantages:
  • firstly, for a given level of cryptographic strength of the digital signature algorithm, the integers with which calculations have to be carried out are shorter, which accordingly reduces the complexity of calculations and reduces the amount of memory used;
  • secondly, when generating keys, it is sufficient to determine a large number p, such that the number p - 1 would have a large prime divisor q;
  • thirdly, the encryption procedure used in this method does not allow one to calculate (as is possible in RSA) digital signatures for new messages without knowing the secret key.
However, with all the advantages listed above, the EGSA algorithm also has some disadvantages. In particular, at the same security level, the signature length and the verification time turn out to be greater than in RSA. In addition, the repetition of the same random number k during the lifetime of the private key x leads to the disclosure of the latter: to recover it, it is enough to solve a system of two linear equations.
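The disclosure is easy to demonstrate; a sketch with the same toy group, assuming the two digests differ modulo q and ignoring other negligible degenerate cases:

```python
import hashlib, random

# Recovering the secret key x from two EGSA signatures that reused the same k:
# two linear equations h(m_i) = x*r + s_i*k (mod q) in the unknowns x and k.
p, q = 2027, 1013
g = pow(2, (p - 1) // q, p)
h = lambda m: int.from_bytes(hashlib.sha256(m).digest(), "big") % q

x = random.randrange(1, q)                        # victim's secret key
k = random.randrange(1, q)                        # the reused nonce
r = pow(g, k, p)                                  # both signatures share r = g^k
s1 = ((h(b"msg one") - x * r) * pow(k, -1, q)) % q
s2 = ((h(b"msg two") - x * r) * pow(k, -1, q)) % q

# Subtracting the two equations eliminates x, giving k, then x follows.
k_rec = ((h(b"msg one") - h(b"msg two")) * pow(s1 - s2, -1, q)) % q
x_rec = ((h(b"msg one") - s1 * k_rec) * pow(r, -1, q)) % q
assert (k_rec, x_rec) == (k, x)
```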
The digital signature scheme proposed by C. Schnorr resembles the EGSA algorithm. We consider the field GF(p), where p is some sufficiently large prime number. Let G(q) be a subgroup of the multiplicative group of the field GF(p) of prime order q, and let g be a generator element of the subgroup G(q).
The secret key for creating a signature is some integer 0 < x < q. The public key for verifying the signature is the residue y = g^x (mod p).

To create a signature for a message m, the following actions are performed:
1. An integer random number k is generated such that 0 < k < q.
2. The residue r = g^k (mod p) is calculated.
3. The value e = h(m || r) is calculated, where h(x) is a hash function and m || r is the concatenation of m and r.
4. The number s, 0 ≤ s < q, is calculated such that the congruence s ≡ k - x·e (mod q) is valid. The signature for the message m is the pair (e, s).
To check the validity of the signature (e, s) for the received message m_1, the following actions are performed:
1. For the signature value (e, s), the number r_1, 0 < r_1 < p, is calculated such that the congruence r_1 ≡ g^s · y^e (mod p) holds.
2. The value e_1 = h(m_1 || r_1) is calculated.
3. The equality e = e_1 is checked. If it is true, then the signature is correct; otherwise the signature is incorrect.
The necessity and sufficiency of the last equality for verifying the authenticity of the signature are obvious. Indeed, for the number r_1 the equality r_1 ≡ g^(s + xe) (mod p) holds. Since, as follows from the Schnorr scheme, k = s + xe + Aq for some integer A, and g^q ≡ 1 (mod p), we get r_1 ≡ g^k (mod p) = r. Hence, by virtue of the properties of the function h(x), e = e_1 if and only if m = m_1, as required.

As in El Gamal's scheme, the repetition of the same random number k during the lifetime of the private key x leads to the disclosure of the latter.
The cryptographic strength of the El-Gamal and Schnorr algorithms is determined by the complexity of solving the discrete logarithm problem in a subgroup of prime order q of the multiplicative group of the field GF(p). Today, breaking the El-Gamal scheme is possible for the values p = 512 bits, q = 256 bits. In accordance with cryptographers' predictive studies, one can expect that, even without the use of a supercomputer, practical breaking of the scheme (determining the secret key from the open one) with parameters p = 1024 bits, q = 256 bits will become possible no later than 2017. Considering that the main characteristics of supercomputers are 10 years ahead of commercially available computers, their use cannot guarantee the impossibility of forgery for the 1024-bit version after 2007.

The review would not be complete without mentioning the first asymmetric encryption algorithm, the Diffie-Hellman algorithm. It was with the revolutionary article of the two mathematicians whose names appear in the name of the algorithm that the history of public-key cryptographic systems began. The Diffie-Hellman algorithm is used to solve the problem of secret key exchange in systems with many users.

The essence of the algorithm is as follows. All operations are performed in some finite field GF(p) with a primitive element g. Consider two participants of the cryptosystem, A and B. Each of them generates a secret number for itself (a and b, respectively). In addition, each participant calculates in the field GF(p) the values g^a and g^b, respectively. These exponentials are the public keys, known to all members of the cryptosystem.

Then, if participant A wants to establish a secure connection with B, he generates a session key for some symmetric encryption algorithm as follows: A takes the public key of party B, g^b, and raises it to the power of his private key a. The result is the key g^(ab). Obviously, the same key can be obtained by party B from knowledge of its own private key b and the public key of A, g^a.
Breaking the Diffie-Hellman algorithm reduces to the discrete logarithm problem in a finite field, for which no efficient algorithm is known. The cryptographic strength of the algorithm essentially depends on the choice of the field order, its primitive element, and the size of the secret exponents.
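A minimal sketch of the exchange with toy numbers (real deployments use primes of 1024 bits and more; the specific values below are this sketch's assumption):

```python
import secrets

# Toy Diffie-Hellman parameters; p = 1019 is prime and g = 2 is a
# primitive element mod p. Both values are public.
p, g = 1019, 2
a = secrets.randbelow(p - 2) + 1   # A's secret exponent
b = secrets.randbelow(p - 2) + 1   # B's secret exponent

A_pub = pow(g, a, p)    # g^a mod p, published by A
B_pub = pow(g, b, p)    # g^b mod p, published by B

# Each side raises the other's public key to its own secret exponent:
key_at_A = pow(B_pub, a, p)   # (g^b)^a = g^(ab) mod p
key_at_B = pow(A_pub, b, p)   # (g^a)^b = g^(ab) mod p
assert key_at_A == key_at_B   # both parties arrive at the same session key
```

An eavesdropper sees only p, g, g^a and g^b; obtaining g^(ab) from these is exactly the problem whose difficulty the scheme rests on.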

Recently, much attention has been paid to Elliptic Curve Cryptography (ECC) algorithms, which are based on constructions called elliptic curves. These algorithms rely on the fact that for the equation xA = B, solved for the integer x with known points A and B of an elliptic curve E, no method is known that is essentially better than enumerating all possible values of x. Moreover, because of the complexity of the elliptic curve construction itself, even such simple attacks as exhaustive search are difficult to evaluate computationally. For this reason, until recently specialists considered reliability estimates for ECC digital signature systems significantly less well-founded than similar estimates for the factorization and discrete logarithm problems. Only in recent years has analysts' confidence in elliptic curve constructions increased significantly.

According to modern complexity estimates, at a reliability level of ECC algorithms corresponding to the cryptographic strength of discrete-logarithm-based algorithms with a key length of 512 bits, one can restrict oneself to an elliptic curve whose points are described by pairs of integers of 160 bits each. This makes it possible to reduce the total length of the secret key record from 512 to 320 bits, which in turn reduces the computational complexity (and hence the time) of signing by up to 2.4 times. At the same time, in the case of ECC the complexity of EDS verification is 36-480 times higher than when using RSA.

Thus, elliptic curve algorithms are of particular interest for applications involving microprocessor cards, where it is necessary to "unload" the card's microprocessor during signing operations and to use less memory for key storage.

In Russia, the following standards have been adopted: GOST R 34.10-2001 "Processes for the formation and verification of electronic digital signatures" and GOST R 34.11-94 "Information technology. Cryptographic information protection. Hash function". The cryptographic security of the Russian electronic digital signature standard is based on the difficulty of the discrete logarithm problem on the set of points of an elliptic curve defined over a finite prime field GF(p), where p > 2^255.

The elliptic curve E in the Russian standard GOST R 34.10-2001 is defined by the relation y^2 = x^3 + ax + b (mod p), where x, y, a, b ∈ GF(p) and 4a^3 + 27b^2 is not congruent to 0 (mod p). All operations in the Russian standard are performed in a cyclic subgroup of the group of points of the elliptic curve E whose order q satisfies the inequalities 2^254 < q < 2^256. Taking into account the properties of the hash function defined in GOST R 34.11-94, the length of the signature in the Russian standard turns out to be 512 bits.

In accordance with the GOST R 34.10-2001 standard, after the parameters of the elliptic curve E have been selected, a secret key d, 0 < d < q, is generated and the point of the curve Q = dP (the public key) is determined, where P is a generator of the cyclic subgroup. To create a signature for a message m, the following steps are performed:
  • 1. An integer random number k is generated such that 0 < k < q.
  • 2. The point of the elliptic curve C = kP is calculated and r = x_C (mod q) is found, where x_C is the abscissa of the point C. If r = 0, return to step 1.
  • 3. The value e = h(m) is calculated, where h(x) is the hash function defined in the GOST R 34.11-94 standard.
  • 4. The number s = (rd + ke) (mod q), 0 < s < q, is calculated. If s = 0, return to step 1. The signature for the message m is the binary representation of the pair (r, s).
To verify the signature (r, s) of the received message m1, the following actions are performed (a toy implementation of both procedures is sketched after the list):
  • 1. Calculate e1 = h(m1) (mod q). If e1 = 0, set e1 = 1.
  • 2. Calculate v = e1^(-1) (mod q).
  • 3. Calculate z1 = sv (mod q) and z2 = -rv (mod q).
  • 4. Determine the point of the elliptic curve C1 = z1·P + z2·Q and R = x_C1 (mod q), where x_C1 is the abscissa of the point C1.
  • 5. If R = r, the signature is correct; otherwise the signature is incorrect.
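To make the flow of the standard concrete, here is a minimal Python sketch of these sign/verify steps over a deliberately tiny curve: y^2 = x^3 + 2x + 2 over GF(17) with base point P = (5, 1) of prime order q = 19. The tiny curve and SHA-256 standing in for the GOST R 34.11-94 hash are illustrative assumptions only; the real standard works over GF(p) with p > 2^255.

```python
import hashlib
import secrets

# Toy curve y^2 = x^3 + 2x + 2 over GF(17); P = (5, 1) has prime order q = 19.
p, a, b = 17, 2, 2
P, q = (5, 1), 19

def ec_add(P1, P2):
    """Add two curve points; None plays the role of the point at infinity."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = infinity
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication k*pt."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def h(m):
    # SHA-256 stands in for the GOST R 34.11-94 hash in this sketch.
    return int.from_bytes(hashlib.sha256(m).digest(), "big")

def sign(m, d):
    while True:
        k = secrets.randbelow(q - 1) + 1     # step 1: 0 < k < q
        C = ec_mul(k, P)                     # step 2: C = kP
        r = C[0] % q                         #         r = x_C mod q
        if r == 0:
            continue
        e = h(m) % q or 1                    # step 3: e = h(m), reduced mod q
        s = (r * d + k * e) % q              # step 4: s = (rd + ke) mod q
        if s != 0:
            return r, s

def verify(m, r, s, Q):
    e1 = h(m) % q or 1                       # step 1: e1 = h(m1) mod q, 0 -> 1
    v = pow(e1, -1, q)                       # step 2: v = e1^(-1) mod q
    z1, z2 = s * v % q, -r * v % q           # step 3: z1 = sv, z2 = -rv mod q
    C1 = ec_add(ec_mul(z1, P), ec_mul(z2, Q))  # step 4: C1 = z1*P + z2*Q
    return C1 is not None and C1[0] % q == r   # step 5: R = r ?

d = secrets.randbelow(q - 1) + 1             # secret key, 0 < d < q
Q = ec_mul(d, P)                             # public key Q = dP
r, s = sign(b"card transaction", d)
assert verify(b"card transaction", r, s, Q)
```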
The cryptographic strength of the Russian digital signature is determined by the strength of the hash function and the strength of the signature algorithm itself. The probability of breaking the hash function defined in GOST R 34.11-94 is 1.73 · 10^(-77) when searching for a collision with a fixed message (irreversibility of the hash function) and 2.94 · 10^(-39) when searching for an arbitrary collision (collision resistance of the hash function).

The strength of the signature algorithm rests on the discrete logarithm problem in a group of points of an elliptic curve. At the moment, among the fastest algorithms for taking discrete logarithms, given the right choice of parameters, are Pollard's ρ-method and λ-method. For the optimized Pollard ρ-method the computational complexity is estimated as O(√q). As a result, to ensure a cryptographic strength of 10^30 operations, it is necessary to use a 256-bit value of the order q of the cyclic subgroup.
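For illustration, a compact sketch of Pollard's ρ-method for discrete logarithms is given below, in the multiplicative subgroup from the Schnorr example rather than on an elliptic curve (the idea carries over unchanged). Floyd cycle-finding keeps memory constant while the expected running time remains O(√q); the toy parameters are this sketch's assumption.

```python
import secrets

# Toy subgroup of prime order q = 509 in GF(p), p = 2q + 1 = 1019, generator g = 4.
p, q, g = 1019, 509, 4

def rho_log(y):
    """Find x with g^x = y (mod p) in expected O(sqrt(q)) steps, O(1) memory."""
    def step(u, a, b):
        # Pseudo-random walk over triples (u, a, b) that preserves the
        # invariant u = g^a * y^b mod p, branching on u mod 3.
        if u % 3 == 0:
            return u * u % p, 2 * a % q, 2 * b % q
        if u % 3 == 1:
            return u * g % p, (a + 1) % q, b
        return u * y % p, a, (b + 1) % q

    while True:
        a0, b0 = secrets.randbelow(q), secrets.randbelow(q)
        tort = hare = (pow(g, a0, p) * pow(y, b0, p) % p, a0, b0)
        while True:
            tort = step(*tort)            # tortoise: one step
            hare = step(*step(*hare))     # hare: two steps (Floyd cycle-finding)
            if tort[0] == hare[0]:
                break
        # g^a1 * y^b1 = g^a2 * y^b2  =>  x = (a1 - a2) / (b2 - b1) mod q
        (_, a1, b1), (_, a2, b2) = tort, hare
        if (b2 - b1) % q != 0:            # retry on the rare degenerate collision
            return (a1 - a2) * pow(b2 - b1, -1, q) % q

x = secrets.randbelow(q - 1) + 1
assert rho_log(pow(g, x, p)) == x
```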

Let us conclude the review of asymmetric encryption algorithms with a list of the most frequently used algorithms for generating message digests (hash codes).
MD2, MD4 and MD5 are hashing algorithms developed by Ron Rivest. Each of them generates a 128-bit hash code. MD2 is the slowest, MD4 is the fastest. The MD5 algorithm can be considered a modification of MD4, in which speed was sacrificed for the sake of increasing reliability.

SHA-1 (Secure Hash Algorithm) is a message digest algorithm that generates a 160-bit hash code. The SHA-1 algorithm is approved by the US government (as part of the Capstone project). It is more reliable than MDx algorithms because it generates a longer hash code, which reduces the likelihood that different input sequences will be converted to the same hash code value.
In the Russian standard GOST R 34.11-94, the digest length is 256 bits.
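The digest lengths quoted above are easy to check with Python's standard hashlib module (the GOST R 34.11-94 function is not in the standard library, and MD2/MD4 are obsolete and generally unavailable, so only MD5 and SHA-1 are shown):

```python
import hashlib

message = b"transaction record"
for name in ("md5", "sha1"):
    digest = hashlib.new(name, message)
    # MD5 yields a 128-bit code, SHA-1 a 160-bit code, as stated above.
    print(f"{name}: {digest.digest_size * 8} bits -> {digest.hexdigest()}")
```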

4. Methods of assessing cryptographic strength
Let us now dwell on assessing the security of the cryptographic algorithms in use. The assessment will be based on the following model.
According to Moore's law, the computing performance of processors doubles every 18 months or, equivalently, grows about 100-fold every 10 years. In 2005, a typical personal computer connected to the Internet had a performance of about 1000 MIPS (1 MIPS corresponds to the performance of the venerable DEC VAX 11/780, the machine from which this way of measuring the performance of a computing system originates). Consequently, the average PC performance in 2010 will be about 10,000 MIPS.

Estimates of the number of computers in the world show that in 2005 there were about 300 million of them. The number of computers grows according to a law under which it increases 10-fold every 10 years. In other words, in 2010 the number of computers will be close to 1 billion.

According to experts, approximately 0.3% of all computers connected to the Internet can be involved in a crypto attack (such attacks belong to the Open Project class and are aimed more at influencing public opinion than at actually compromising a secret key). It is assumed that in the future the share of computers that can be involved in a crypto attack will be 0.1%.
In addition to attacks of the Open Project class, there are attacks of the Covert Project class, in which underutilized cycles of corporate computing systems are used to carry out the attack. For example, the computing power of Sun Microsystems alone in 2000 was 100,000 MIPS. On this basis, by 2010 the computing power available for a Covert Attack will amount to 100 million MIPS.

It is assumed that a reasonable time allotted for a crypto attack is 1 year.
The table below shows the available computing power expressed in MY (1 MY = 1 MIPS × 1 year); a short sketch reproducing this arithmetic follows the table.

Tab. B.2. Estimate of the computing power available for carrying out a crypto attack

Year | Covert Attack | Open Project
2005 | 3 · 10^6      | 3 · 10^8
2010 | 10^8          | 10^10
2020 | 10^11         | 10^13
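The table's orders of magnitude follow from the model's assumptions; below is a minimal sketch of that arithmetic for the Open Project column (the yearly inputs are the figures quoted above, so the results should be read as order-of-magnitude estimates):

```python
# Open Project capacity: number of computers x recruitable share
# x per-PC MIPS x 1 year of attack time = capacity in MY.
computers   = {2005: 300e6, 2010: 1e9}     # ~10x more computers every 10 years
share       = {2005: 0.003, 2010: 0.001}   # 0.3% now, 0.1% assumed later
mips_per_pc = {2005: 1_000, 2010: 10_000}  # Moore's law: ~100x per 10 years

for year in (2005, 2010):
    my = computers[year] * share[year] * mips_per_pc[year]
    print(f"{year}: Open Project capacity ~ {my:.0e} MY")
# Prints ~9e8 MY for 2005 and ~1e10 MY for 2010 -- the same orders of
# magnitude as the corresponding column of Tab. B.2.
```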
Tab. B.3 provides data on the cryptographic strength of symmetric encryption algorithms and the RSA algorithm.

Tab. B.3. Cryptographic strength of encryption algorithms

Secret key length, bits | RSA modulus length, bits | Cryptographic strength, MY
56                      | 384                      | 10^2
64                      | 512                      | 3 · 10^4
80                      | 768                      | 2 · 10^8
90                      | 1024                     | 3 · 10^11
98                      | 1280                     | 10^14
106                     | 1536                     | 3 · 10^16
112                     | 1792                     | 2 · 10^18
120                     | 2048                     | 3 · 10^20
128                     | 2304                     | 10^23
The table shows that in 2010, symmetric algorithms with a key length of at least 90 bits and the RSA algorithm with a modulus length of at least 1024 bits remain strong.
The fact that the computing power available for a cryptographic attack grows about 1000-fold every 10 years means that the minimum sizes of the symmetric and asymmetric keys must be increased by about 10 and 20 bits, respectively, over the same period (for a symmetric cipher this follows directly from 2^10 ≈ 1000: ten extra key bits absorb a thousandfold increase in brute-force capacity).

5. Key management systems
With the advent of a large number of cryptosystems based on the encryption standards accepted around the world, a new and no less important problem has arisen: for two participants of a cryptosystem to exchange encrypted messages, carefully guarded secret keys for encrypting and decrypting those messages must be delivered to both of them in advance.

This problem grows with the number of users who want to exchange encrypted messages with each other. For a network of N users, N · (N - 1) / 2 different keys must be in action simultaneously; even for N = 1000 the number of required keys is close to half a million (1000 · 999 / 2 = 499,500). Since, for security reasons, secret encryption keys must be changed as often as possible, producing, packing and sending them by trusted couriers from some absolutely reliable center, as is customary in existing "closed communication" systems, becomes an almost impossible task.
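A short check of this quadratic growth (illustrative only):

```python
def pairwise_keys(n):
    """Number of distinct pairwise secret keys for n users: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000, 10_000):
    print(f"{n:>6} users -> {pairwise_keys(n):>12,} keys")
# 1000 users already require 499,500 keys, close to half a million.
```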

An attempt to solve the key distribution problem using the traditional methods adopted back in the 1980s usually leads to so many violations of the regulators' requirements by users that almost all information protection in such systems turns out, at best, to be a waste of money.

A solution to the key distribution problem was found in Public Key Infrastructure (PKI) technology. Its essence is that users generate their individual keys themselves, independently of one another; each key is kept secret from everyone on an individual carrier (a special microprocessor card, a Touch Memory non-volatile memory tablet, a secure flash memory device, etc.). Each user then computes from his individual key, using a well-known procedure, his so-called public key, that is, a block of information that he makes available to everyone with whom he would like to exchange confidential messages.

Users can exchange public keys with each other immediately before transmitting encrypted messages. Another, organizationally simpler, alternative is to have a third party collect all users' public keys into a single directory. The directory administrator must authenticate the users' public keys with his signature and distribute this directory to all other participants in the exchange. Today, public key administration services are called certification authorities (CAs).
The X.500 protocol can be used as a standard for a single key catalog.

Public keys certified by a CA are called certificates. A public key certificate is an object that associates a user with their key. The certificate is used to verify the digital signature.
Typically, certificates are stored as unified directory service objects on dedicated servers. If a key is compromised or the data of the certificate itself changes, the certificate must be revoked. To do this, it is entered into a Certificate Revocation List (CRL) maintained by the CA.

The ITU-T X.509 specifications are widely used as a standard for describing the format of public key certificates. Most well-known open protocols that protect data from unauthorized access, such as SET, SSL and HTTPS, use X.509 certificates. Today, the third version of the X.509 standard is in use.

The main task of a certificate is to establish a correspondence between a user and his public key. The fields of an X.509 certificate include (a simplified model of these fields is sketched after the list):
  • - version number of the X.509 standard;
  • - certificate number;
  • - identifier of the EDS algorithm;
  • - the identifier of the certification service that issued the certificate;
  • - certificate holder identifier;
  • - certificate validity period;
  • - the public key to be certified;
  • - the signature of the certificate made by the CA.
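For illustration, the listed fields can be modeled as a simple data structure. The sketch below is a simplified model with names of this sketch's own choosing; a real X.509 certificate is an ASN.1/DER structure, not a Python object.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Certificate:
    """Simplified model of the X.509 fields listed above (illustrative only)."""
    version: int                 # X.509 standard version number (3 today)
    serial_number: int           # certificate number
    signature_algorithm: str     # identifier of the EDS algorithm
    issuer: str                  # identifier of the issuing certification service
    subject: str                 # certificate holder identifier
    not_before: datetime         # start of the validity period
    not_after: datetime          # end of the validity period
    public_key: bytes            # the public key being certified
    ca_signature: bytes          # the CA's signature over all fields above
```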
Open distribution of keys for encrypting confidential messages has become most widespread in corporate telecommunication networks and in public networks for electronic data exchange, primarily the Internet. The American programmer Philip Zimmermann even wrote a publicly available email encryption package called PGP (Pretty Good Privacy).
The PGP package successfully combines message encryption with symmetric block algorithms, distribution of the symmetric keys using the asymmetric RSA algorithm, and creation of electronic signatures for messages. Even for people who have never met before, PGP provides a convenient means of exchanging information.
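The hybrid pattern PGP popularized, in which a fresh symmetric session key protects the message while RSA protects the session key, can be sketched with the widely used Python cryptography package. This is an illustration of the pattern only, not of PGP's own formats or algorithms:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's long-term RSA key pair (plays the role of a PGP key).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the message with a fresh symmetric session key...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"confidential message")

# ...and protect the session key itself with the recipient's RSA public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key with the RSA private key, then decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"confidential message"
```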
Besides PGP, there are other key management systems. The Kerberos protocol, developed at the Massachusetts Institute of Technology, is quite popular. The protocol implements several functions. One of them is storing secret keys in a secure database; a key is known only to Kerberos and its owner. Another is acting as a trusted intermediary between two subscribers who wish to exchange secret keys. Kerberos also provides user authentication and key distribution services.

The SKIP protocol (Simple Key Management for Internet Protocols), developed by Sun Microsystems, is also well known. SKIP is easy to implement: it describes how a key is computed on the basis of public key certificates. However, using SKIP imposes certain restrictions on the choice of encryption and hashing algorithms. SKIP is specified as an optional component of the Internet Protocol Security (IPSec) specification.

In addition to the listed protocols, the Diffie-Hellman and KEA (Key Exchange Algorithm) algorithms are used for key management.
PKI infrastructures are currently implemented on the basis of specialized products such as RSA KEON (RSA Security), UniCERT (Baltimore), Notary-PRO (Signal-KOM), CryptoPRO CSP (Crypto-PRO), etc.
 