Abstract

The goal of cryptography is to make it impossible to take a cipher text and reproduce the original plain text without the corresponding key. With good cryptography, messages are encrypted in such a way that brute force attacks against the algorithm or the key are all but impossible. Good cryptography gets its security by using very long keys and by using encryption algorithms that are resistant to other forms of attack. The application of neural networks represents a possible next step in the development of good cryptography. This paper deals with the use of neural networks in cryptography, i.e. with designing a neural network that can be used practically in the area of cryptography. The paper also includes an experimental demonstration.

INTRODUCTION TO CRYPTOGRAPHY

Cryptography deals with building systems that secure messages so that no intruder can read them. Systems that provide data privacy are called cipher systems. The set of rules used for the encryption of each message is called the cipher key. Encryption is the process in which the open text, i.e. the message, is transformed into cipher text according to these rules. Decryption is the inverse process, in which the receiver of the cipher text transforms it back into the original text. The cipher key must have several important properties; the most important one is the uniqueness of encryption and decryption. The open text is usually composed of international alphabet characters, digits and punctuation marks. The cipher text has the same composition as the open text; very often it contains only characters of the international alphabet or only digits, because this makes transmission over the media easier. Historically, cipher systems appeared in the following sequence: transposition ciphers, substitution ciphers, cipher tables and codes. Alongside the effort to keep information secret, the tendency to read cipher messages without knowing the cipher key evolved, and cipher keys were therefore guarded very closely. The main goal of cryptanalysis is to recover cipher messages and to reconstruct the keys used by means of a thorough analysis of the cipher text. It makes use of mathematical statistics, algebra, mathematical linguistics, etc., as well as of known mistakes made during encryption. The regularities of the open text and of the applied cipher key are reflected in every cipher system, and improving the cipher key helps to reduce these regularities. The safety of a cipher system lies in its resistance against deciphering. The goal of cryptanalysis is to make it possible to take a cipher text and reproduce the original plain text without the corresponding key.

Two major techniques used in encryption are symmetric and asymmetric encryption. In symmetric encryption, two parties share a single encryption-decryption key (Khaled, Noaman, Jalab 2005). The sender encrypts the original message (P), which is referred to as plain text, using a key (K) to generate apparently random nonsense, referred to as cipher text (C), i.e.:

C = Encrypt(K, P)    (1)

Once the cipher text is produced, it may be transmitted. Upon receipt, the cipher text can be transformed back to the original plain text by using a decryption algorithm and the same key that was used for encryption, which can be expressed as follows:

P = Decrypt(K, C)    (2)

In asymmetric encryption, two keys are used: one key for encryption and another key for decryption. The length of a cryptographic key is almost always measured in bits. The more bits a particular cryptographic algorithm allows in the key, the more keys are possible and the more secure the algorithm becomes.
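To make the relationship in (1) and (2) concrete, the following minimal Python sketch (added for illustration, not taken from the paper) shows a toy symmetric scheme in which the same key is used for encryption and decryption; a simple repeating XOR stream stands in here for a real algorithm such as AES, and the function names mirror the notation above.

import os

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # C = Encrypt(K, P): XOR each plain text byte with the key stream
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # P = Decrypt(K, C): XOR is its own inverse, so the same key recovers P
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

key = os.urandom(16)                  # 128-bit symmetric key shared by both parties
plaintext = b"attack at dawn"
ciphertext = encrypt(key, plaintext)  # apparently random nonsense
assert decrypt(key, ciphertext) == plaintext

The sketch also shows why key length matters: with a 128-bit key there are 2^128 possible keys, so a brute force search over the key space is infeasible even though this toy cipher itself is weak.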
The following key size recommendations should be considered when reviewing protection (Ferguson, Schneier, Kohno 2010):

Symmetric key:
• Key sizes of 128 bits (standard for SSL) are sufficient for most applications.
• Consider 168 or 256 bits for secure systems such as large financial transactions.

Asymmetric key:
• Key sizes of 1280 bits are sufficient for most personal applications.
• 1536 bits should be acceptable today for most secure applications.
• 2048 bits should be considered for highly protected applications.

Hashes:
• Hash sizes of 128 bits (standard for SSL) are sufficient for most applications.
• Consider 168 or 256 bits for secure systems, as many hash functions are currently being revised (see above).

NIST and other standards bodies will provide up-to-date guidance on suggested key sizes.

BACKPROPAGATION NEURAL NETWORKS

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.

Figure 1: A general three-layer neural network

A backpropagation network is one of the most complex neural networks for supervised learning. Regarding topology, it belongs to the multilayer feedforward neural networks, see Fig. 1 (Volna 2000). Usually a fully connected variant is used, so that each neuron in the n-th layer is connected to all neurons in the (n+1)-th layer; this is not necessary, however, and in general some connections may be missing (see the dashed lines). There are no connections between neurons of the same layer. A subset of input units has no input connections from other units; their states are fixed by the problem. Another subset of units is designated as output units; their states are considered the result of the computation. Units that are neither input nor output are known as hidden units.

Figure 2: A simple artificial neuron (http://encefalus.com/neurology-biology/neuralnetworks-real-neurons)

A basic computational element is often called a neuron (Fig. 2), node or unit (Fausett 1994). It receives input from some other units, or perhaps from an external source. Each input has an associated weight w, which can be modified so as to model synaptic learning. The unit computes some function f of the weighted sum of its inputs (3):

y = f(Σ_i w_i x_i)    (3)
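As an illustration of equation (3) and of the feedforward topology in Fig. 1, the following Python sketch (an added example under assumed weights and a sigmoid activation, not code from the paper) computes the output of a single neuron and chains such neurons into a small fully connected three-layer pass.

import math
import random

def sigmoid(x):
    # A common choice for the activation function f
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(weights, inputs):
    # Equation (3): f applied to the weighted sum of the unit's inputs
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)))

def layer_output(weight_matrix, inputs):
    # Fully connected layer: every neuron receives every input from the previous layer
    return [neuron_output(weights, inputs) for weights in weight_matrix]

random.seed(0)
n_in, n_hidden, n_out = 4, 3, 2
w_hidden = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
w_out = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

x = [0.1, 0.9, 0.3, 0.5]             # states of the input units, fixed by the problem
hidden = layer_output(w_hidden, x)   # hidden units
y = layer_output(w_out, hidden)      # output units: the result of the computation

During backpropagation training the weights w would be adjusted from examples, which is the supervised learning process referred to above; only the forward computation of equation (3) is sketched here.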
