A Homomorphically Encrypted Neural Network Based on Numpy

In a distributed AI setting, a homomorphically encrypted neural network can help protect both a company's intellectual property and its users' privacy. Let's walk through how to implement one using Numpy, following the approach described by Andrew Trask, a data scientist at DeepMind and a deep learning instructor at Udacity.

TLDR: In this article we will train a neural network that is fully encrypted during training (the training data itself remains unencrypted). The resulting network has two valuable properties. First, the intelligence of the network is protected from theft, so valuable AI can be trained in insecure environments without the risk of it being stolen. Second, the network can only make encrypted predictions, which presumably cannot affect the outside world, because without the secret key the outside world cannot interpret them. This creates a valuable power imbalance between users and a superintelligent system: if the AI is homomorphically encrypted, then from the AI's point of view the entire outside world is also homomorphically encrypted. The human who controls the secret key can choose either to unlock the AI itself (releasing it on the world) or to decrypt only individual predictions the AI makes, which looks safer.

Superintelligence

Many people worry that a superintelligent system might one day decide to harm humanity. Stephen Hawking has called for a new form of world government to govern the capabilities we give to artificial intelligence, so that it does not end up destroying us. These are bold statements, and I think they reflect concerns shared by both the research community and the public at large. This article introduces a potential technical solution to this problem, along with some toy example code to demonstrate the approach.

The goal is simple. We want to build AI that can become very smart in the future (smart enough to cure cancer or end world hunger), yet whose intelligence is controlled by a human holding a key, so that the application of that intelligence is limited. Unlimited learning is great; unlimited application of what is learned is potentially dangerous. To introduce this idea, let me briefly describe two very exciting fields of research: deep learning and homomorphic encryption.

1. What is deep learning?

Deep learning is a suite of tools for automating intelligence, based primarily on neural networks. This subfield of computer science has driven many of the recent breakthroughs in AI, as deep learning has surpassed previous performance records on a wide range of intelligence tasks. For example, it is a major component of DeepMind's AlphaGo system. Neural networks make predictions from inputs, and they learn by trial and error: they start out making essentially random predictions, then receive an "error signal" indicating whether each prediction was too high or too low. After millions of repetitions of this cycle, the network begins to pick up on the underlying patterns.

2. What is homomorphic encryption?

As the name suggests, homomorphic encryption is a form of encryption. In the asymmetric (public-key) setting, it turns perfectly readable text into garbled text using a "public key". Crucially, the garbled text can be turned back into the original text using a "private key"; without the private key, however, you (in theory) cannot decode it. Homomorphic encryption is a special category of encryption that allows someone to modify encrypted information in specific ways without being able to read it. For example, homomorphic encryption can be applied to numbers so that encrypted numbers can be added and multiplied without ever being decrypted. Here are a few examples.
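To make the property concrete, here is a schematic illustration; the plaintext values 3 and 5 are made up purely for illustration, and any scheme that is homomorphic with respect to addition and multiplication would behave this way:

If Cipher A is an encryption of 3 and Cipher B is an encryption of 5, then:
Cipher A + Cipher B decrypts to 8
Cipher A * Cipher B decrypts to 15
2 * Cipher A decrypts to 6 (an unencrypted number operating directly on a ciphertext)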
There are a growing number of homomorphic encryption schemes, each with different properties. It is a relatively young field, and several significant problems are still being worked out, but we will come back to that later. For now, let's start with a public-key scheme over the integers that is homomorphic with respect to multiplication and addition, allowing operations like the ones illustrated above. Moreover, because the public key allows "one-way" encryption, you can even perform operations between unencrypted numbers and encrypted numbers (by one-way encrypting the former), as in the 2 * Cipher A example above. (Some encryption schemes don't even require that, but again, we'll come back to this later.)

3. Can we combine the two?

Perhaps the most frequent intersection of deep learning and homomorphic encryption so far has been data privacy. When you homomorphically encrypt data, you can no longer read it, but it still retains most of its interesting statistical structure. This has allowed people to train models on encrypted data (CryptoNets). There is even a startup hedge fund, Numer.ai, that encrypts expensive proprietary data and lets anyone try to train machine learning models on it to predict the stock market. Normally this would be impossible, because it would mean giving away extremely expensive information (and models cannot be trained on conventionally encrypted data). This article, however, does the opposite: we encrypt the neural network itself and train it on unencrypted data.

A neural network, for all its apparent complexity, can actually be broken down into a surprisingly small number of simple building blocks that are repeated over and over. In fact, many state-of-the-art neural networks can be built using only the following operations: addition, multiplication, division, subtraction, sigmoid, tanh, and the exponential function.

So let's ask the obvious technical question: can we homomorphically encrypt the neural network itself? Would we want to? It turns out that, with a few conservative approximations, we can.

Addition - works out of the box
Multiplication - works out of the box
Division - works out of the box? Just multiply by the reciprocal
Subtraction - works out of the box? Just add the negative
Sigmoid - hmm... perhaps a little harder
Tanh - hmm... perhaps a little harder
Exponential - hmm... perhaps a little harder

It seems that division and subtraction are fairly trivial, but the more complicated functions are... well... more complicated than simple addition and multiplication. To try to homomorphically encrypt a deep neural network, we need one more secret ingredient.

4. Taylor series expansion

Perhaps you remember from calculus: a Taylor series lets us compute a complicated (nonlinear) function as an infinite series of additions, subtractions, multiplications, and divisions. This is perfect (except for the infinite part). Fortunately, if you stop short of the exact Taylor expansion, you still get a close approximation of the function. Below are a few popular functions approximated by their Taylor series (source).

Wait! There are exponents here! Don't worry: an exponent is just repeated multiplication. Below is a Python implementation that uses the Taylor series to approximate the sigmoid function (the formula can be found on Wolfram Alpha). We will take just the first few terms of the series and see how close we get.
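For reference, the first few terms of the Taylor series of the sigmoid around zero (the same coefficients appear again later in the H_sigmoid matrix) are:

sigmoid(x) ≈ 1/2 + x/4 - x^3/48 + x^5/480 - 17*x^7/80640 + ...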
import numpy as np

def sigmoid_exact(x):
    return 1 / (1 + np.exp(-x))

# using the Taylor series approximation
def sigmoid_approximation(x):
    return (1 / 2) + (x / 4) - (x**3 / 48) + (x**5 / 480)

for lil_number in [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0]:
    print("Input: " + str(lil_number))
    print("Exact sigmoid: " + str(sigmoid_exact(lil_number)))
    print("Approximate sigmoid: " + str(sigmoid_approximation(lil_number)))

Result:

Input: 0.1
Exact sigmoid: 0.52497918747894
Approximate sigmoid: 0.5249791874999999

Input: 0.2
Exact sigmoid: 0.549833997312478
Approximate sigmoid: 0.549834

Input: 0.3
Exact sigmoid: 0.574442516811659
Approximate sigmoid: 0.5744425624999999

Input: 0.4
Exact sigmoid: 0.598687660112452
Approximate sigmoid: 0.598688

Input: 0.5
Exact sigmoid: 0.6224593312018546
Approximate sigmoid: 0.6224609375000001

Input: 0.6
Exact sigmoid: 0.6456563062257954
Approximate sigmoid: 0.6456620000000001

Input: 0.7
Exact sigmoid: 0.6681877721681662
Approximate sigmoid: 0.6682043125000001

Input: 0.8
Exact sigmoid: 0.6899744811276125
Approximate sigmoid: 0.690016

Input: 0.9
Exact sigmoid: 0.7109495026250039
Approximate sigmoid: 0.7110426875

Input: 1.0
Exact sigmoid: 0.7310585786300049
Approximate sigmoid: 0.73125

Using only the first four terms of the Taylor series, we already get quite close to the true sigmoid. Now that we have a general strategy, it's time to choose a homomorphic encryption algorithm.

5. Choosing an encryption algorithm

Homomorphic encryption is a relatively new field; its major milestone was the discovery of the first fully homomorphic encryption scheme by Craig Gentry in 2009, a landmark result that many have since built upon. Most of the exciting research around homomorphic encryption concerns building a Turing-complete, fully homomorphic computer: the search is for a scheme in which the logic gates needed for arbitrary computation can be evaluated both efficiently and securely. The general hope is that people could safely offload work to the cloud with no risk that the data they send could be read by anyone other than the sender. It is a very cool idea, and a lot of progress has been made.

However, this perspective has some drawbacks. In general, most fully homomorphic encryption schemes are extremely slow compared to ordinary computation (not yet practical). This has inspired an interesting line of research that restricts the set of operations to something that is only somewhat homomorphic, so that at least some computations can be performed efficiently. Less flexible but faster: a common trade-off in computing.

This is where we want to start looking. In theory, we want a homomorphic encryption scheme that operates on floating-point numbers (although, as we will see shortly, we end up with one that operates on integers) rather than on binary values. Binary would work, but not only would it require the full flexibility of fully homomorphic encryption (at a cost in performance), it would also force us to manage the logic of translating between binary representations and the mathematical operations we want to compute. A less powerful HE algorithm (HE being short for homomorphic encryption) tailored to floating-point arithmetic would be a better fit. Despite this restriction, there are still plenty of choices.
Here are some popular schemes with the properties we need:

- Efficient Homomorphic Encryption on Integer Vectors and Its Applications
- Yet Another Somewhat Homomorphic Encryption (YASHE)
- Somewhat Practical Fully Homomorphic Encryption (FV)
- Fully Homomorphic Encryption without Bootstrapping

The best choices here are probably YASHE or FV. YASHE is the scheme used by the popular CryptoNets work, and its support for floating-point operations is good; however, it is fairly complicated. To keep this article easy to read and easy to experiment with, we will go with the slightly less advanced (and probably less secure) Efficient Integer Vector Homomorphic Encryption. That said, it is worth pointing out that new HE schemes are being developed as you read this, and the ideas presented here carry over to any scheme that is homomorphic with respect to addition and multiplication of integers or floating-point numbers. If anything, my hope is to draw attention to this application of HE so that more HE schemes optimized for deep learning get developed.

The paper by Yu, Lai, and Payor, Efficient Integer Vector Homomorphic Encryption, describes this scheme in detail, and a reference implementation is available on GitHub (jamespayor/vector-homomorphic-encryption); the main logic lives in the C++ file vhe.cpp. Below we walk the reader through a Python port of that code and explain what it does. This is also useful if you later decide to implement a more advanced scheme, since the general ideas (function names, variable names, and so on) are fairly universal.

6. Homomorphic encryption in Python

First, some homomorphic encryption vocabulary:

- Plaintext: unencrypted data, also called the "message". In our case, this will be a collection of numbers representing the neural network.
- Ciphertext: encrypted data. We will perform mathematical operations on the ciphertext, and those operations change the underlying plaintext.
- Public key: a pseudo-random sequence of numbers that lets anyone encrypt data. It can be shared, because (in theory) it can only be used for encryption.
- Private/secret key: a pseudo-random sequence of numbers that lets you decrypt data that was encrypted with the public key. You do not want to share it; otherwise, someone else can decrypt your messages.

And the corresponding variable names (different homomorphic encryption schemes tend to use these standard names):

- S: the secret/private key, a matrix. Used for decryption.
- M: the public key. Used for encryption and for the mathematical operations. In some schemes not every operation requires the public key, but this one uses it very extensively.
- c: the encrypted data vector, the ciphertext.
- x: the message, the plaintext. Some papers use m for this instead.
- w: a single "weighting" scalar used to reweight the input message x (making it consistently larger or smaller). It tunes the signal-to-noise ratio: boosting the signal makes the message less susceptible to noise under a given operation, but boosting it too much increases the probability of corrupting the data entirely. It's a balancing act.
- E or e: generally refers to random noise.
In some cases, this means noise added to the data before it is encrypted with the public key; in general, the noise is what makes decryption hard. It also ensures that two encryptions of the same message are different, which is important for making the message hard to crack. Note that, depending on the scheme and implementation, this may be a vector or a matrix. In other cases it refers to the noise that accumulates as operations are performed, as described later. As is conventional in mathematical papers, uppercase letters denote matrices, lowercase letters denote vectors, and italic lowercase denotes scalars.

We care about four operations of homomorphic encryption: generating a public/private key pair, one-way encryption, decryption, and the mathematical operations themselves. Let's start with decryption. The first formula below describes the general relationship between the key S and the message x; the second shows how to decrypt a message using the key:

S c = w x + e
x = round(S c / w)

Notice that the decryption formula does not contain e. Basically, homomorphic encryption introduces enough noise that recovering the original message without the key is hard, but little enough that the noise disappears in the rounding once you do have the key. Here "round" means "round to the nearest integer"; other homomorphic encryption schemes round to other things, but a rounding step of this kind is nearly universal. Encryption, then, is about generating a ciphertext c that makes this relationship hold. If S is a random matrix, c is hard to decrypt. A simple, non-symmetric way of encrypting is just to use the inverse of the secret key. Let's look at the corresponding Python code.

import numpy as np

def generate_key(w,m,n):
    S = (np.random.rand(m,n) * w / (2 ** 16)) # can prove max(S) < w
    return S

def encrypt(x,S,m,n,w):
    assert len(x) == len(S)
    e = (np.random.rand(m)) # can prove max(e) < w / 2
    c = np.linalg.inv(S).dot((w * x) + e)
    return c

def decrypt(c,S,w):
    return (S.dot(c) / w).astype('int')

x = np.array([0,1,2,5])
m = len(x)
n = m
w = 16

S = generate_key(w,m,n)

You can run the code above in a Jupyter notebook and experiment a bit: notice that we can perform basic operations directly on the ciphertext, and they change the underlying plaintext accordingly, as the short demo below shows. Quite elegant, isn't it?
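A minimal sanity check, assuming the definitions above have just been run (a sketch: the ciphertext values themselves are random-looking floats, but the decrypted results come out as indicated because the injected noise e stays well below w):

c = encrypt(x,S,m,n,w)       # ciphertext: an unreadable vector of floats
print(decrypt(c,S,w))        # -> [0 1 2 5]   the original message
print(decrypt(c + c,S,w))    # -> [0 2 4 10]  adding ciphertexts adds the plaintexts
print(decrypt(c * 10,S,w))   # -> [0 10 20 50] scaling a ciphertext scales the plaintext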
7. Optimizing encryption

An important lesson: look at the formulas above again. If the key S is the identity matrix, then the ciphertext c is just a reweighted, slightly noisy version of the input x. If that sentence doesn't make sense, search for an identity matrix tutorial; space does not permit a detailed explanation here. This observation tells us how encryption can work in this scheme. Rather than explicitly issuing a separate "public key" and "private key", the authors of the paper propose a "key switching" technique: swapping the secret key S for a different secret key S'. More specifically, key switching involves generating a matrix M that performs that transformation. Since M can convert a message from an unencrypted state (secret key = identity matrix) to an encrypted state (a random, hard-to-guess secret key), that matrix M is exactly what we can use as our public key!

That paragraph packs in a lot, probably too fast, so let's restate what happened:

- Given the two formulas above, if the key is the identity matrix, the message is not encrypted.
- Given the two formulas above, if the key is a random matrix, the message is encrypted.
- We can construct a matrix M that converts one secret key into another.
- When the matrix M converts the identity key into a random secret key, it is, by definition, encrypting the message in a one-way fashion.
- Because M plays the role of a "one-way encryption", we call it the "public key" and can distribute it like one, since it cannot be used for decryption.

Okay, no more delay; let's see what this looks like in Python.

import numpy as np

def generate_key(w,m,n):
    S = (np.random.rand(m,n) * w / (2 ** 16)) # can prove max(S) < w
    return S

def encrypt(x,S,m,n,w):
    assert len(x) == len(S)
    e = (np.random.rand(m)) # can prove max(e) < w / 2
    c = np.linalg.inv(S).dot((w * x) + e)
    return c

def decrypt(c,S,w):
    return (S.dot(c) / w).astype('int')

def get_c_star(c,m,l):
    c_star = np.zeros(l * m,dtype='int')
    for i in range(m):
        b = np.array(list(np.binary_repr(np.abs(c[i]))),dtype='int')
        if(c[i] < 0):
            b *= -1
        c_star[(i * l) + (l-len(b)): (i+1) * l] += b
    return c_star

def switch_key(c,S,m,n,T):
    l = int(np.ceil(np.log2(np.max(np.abs(c)))))
    c_star = get_c_star(c,m,l)
    S_star = get_S_star(S,m,n,l)
    n_prime = n + 1

    S_prime = np.concatenate((np.eye(m),T.T),0).T
    A = (np.random.rand(n_prime - m, n*l) * 10).astype('int')
    E = (1 * np.random.rand(S_star.shape[0],S_star.shape[1])).astype('int')
    M = np.concatenate(((S_star - T.dot(A) + E),A),0)
    c_prime = M.dot(c_star)
    return c_prime,S_prime

def get_S_star(S,m,n,l):
    S_star = list()
    for i in range(l):
        S_star.append(S*2**(l-i-1))
    S_star = np.array(S_star).transpose(1,2,0).reshape(m,n*l)
    return S_star

def get_T(n):
    n_prime = n + 1
    T = (10 * np.random.rand(n,n_prime - n)).astype('int')
    return T

def encrypt_via_switch(x,w,m,n,T):
    c,S = switch_key(x*w,np.eye(m),m,n,T)
    return c,S

x = np.array([0,1,2,5])
m = len(x)
n = m
w = 16

S = generate_key(w,m,n)

The basic idea of this code is to make S essentially the identity matrix and simply concatenate a random vector T onto it. T therefore holds all the information needed for the secret key, but we still build a full-sized S matrix so that everything works out.
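To see the one-way (public-key style) encryption in action, here is a small demo using the functions above (a sketch: the ciphertext values are random, but decryption should recover the message because the noise term E is negligible at these settings):

T = get_T(n)                              # the random vector that drives the key switch
c, S = encrypt_via_switch(x, w, m, n, T)  # one-way encrypt x; S is the switched secret key
print(decrypt(c, S, w))                   # -> [0 1 2 5], the original message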
8. Building an XOR neural network

Now that we know how to encrypt and decrypt messages (and how to add and multiply them), it's time to extend this to the rest of the operations we need to build a simple XOR neural network. Although neural networks are, technically, just a long series of very simple operations, a few combinations of those operations are convenient enough to deserve helper functions. Below I describe each operation we need and the high-level way we implement it (basically, which sequence of additions and multiplications we will use); the code follows after the list. For the full details, see the paper mentioned above.

- Floating-point numbers: we simply scale floating-point numbers up into integers and train the network on the integers (interpreting them as scaled floating-point values). For example, with a scale of 1000, 0.2 * 0.5 = 0.1 becomes 200 * 500 = 100000; to recover the result we compute 100000 / (1000 * 1000) = 0.1 (because we multiplied, we divide by the scale squared). This takes some getting used to, and since the HE scheme rounds to the nearest integer it also lets us control the precision of the network.
- Vector-matrix multiplication: this is our bread and butter (the most basic operation). In fact, the key-switching matrix M is itself one way of performing a linear transformation.
- Inner product: in the right context, the linear transformation above becomes an inner product.
- Sigmoid: since we can do vector-matrix multiplication, we can evaluate arbitrary polynomials given enough multiplications, and since we already know the Taylor series polynomial for sigmoid, we can evaluate an approximation of sigmoid.
- Element-wise matrix multiplication: this one is surprisingly tedious; we need a vector-matrix multiplication or a series of inner products.
- Outer product: we can do this with masking and inner products.

As a disclaimer, there may well be more efficient ways to perform these operations, but I didn't want to risk compromising the integrity of the homomorphic encryption scheme, so I have to some extent bent over backwards to build them out of only the functions provided in the paper (plus the allowance made for the sigmoid expansion). Now let's look at the Python code that implements them. (It relies on a few helper functions, such as innerProd, transpose, one_way_encrypt_vector, and s_decrypt, which come from the full implementation and are not reproduced in this excerpt.)

import copy

def sigmoid(layer_2_c):
    out_rows = list()
    for position in range(len(layer_2_c)-1):

        M_position = M_onehot[len(layer_2_c)-2][0]

        layer_2_index_c = innerProd(layer_2_c,v_onehot[len(layer_2_c)-2][position],M_position,l) / scaling_factor

        x = layer_2_index_c
        x2 = innerProd(x,x,M_position,l) / scaling_factor
        x3 = innerProd(x,x2,M_position,l) / scaling_factor
        x5 = innerProd(x3,x2,M_position,l) / scaling_factor
        x7 = innerProd(x5,x2,M_position,l) / scaling_factor

        xs = copy.deepcopy(v_onehot[5][0])
        xs[1] = x[0]
        xs[2] = x2[0]
        xs[3] = x3[0]
        xs[4] = x5[0]
        xs[5] = x7[0]

        out = mat_mul_forward(xs,H_sigmoid[0:1],scaling_factor)
        out_rows.append(out)
    return transpose(out_rows)[0]

def load_linear_transformation(syn0_text,scaling_factor = 1000):
    syn0_text *= scaling_factor
    return linearTransformClient(syn0_text.T, getSecretKey(T_keys[len(syn0_text)-1]), T_keys[len(syn0_text)-1], l)

def outer_product(x,y):
    flip = False
    if(len(x) < len(y)):
        flip = True
        tmp = x
        x = y
        y = tmp

    y_matrix = list()
    for i in range(len(x)-1):
        y_matrix.append(y)

    y_matrix_transpose = transpose(y_matrix)

    outer_result = list()
    for i in range(len(x)-1):
        outer_result.append(mat_mul_forward(x * onehot[len(x)-1][i],y_matrix_transpose,scaling_factor))

    if(flip):
        return transpose(outer_result)
    return outer_result

def mat_mul_forward(layer_1,syn1,scaling_factor):

    input_dim = len(layer_1)
    output_dim = len(syn1)

    buff = np.zeros(max(output_dim+1,input_dim+1))
    buff[0:len(layer_1)] = layer_1
    layer_1_c = buff

    syn1_c = list()
    for i in range(len(syn1)):
        buff = np.zeros(max(output_dim+1,input_dim+1))
        buff[0:len(syn1[i])] = syn1[i]
        syn1_c.append(buff)

    layer_2 = innerProd(syn1_c[0],layer_1_c,M_onehot[len(layer_1_c) - 2][0],l) / float(scaling_factor)
    for i in range(len(syn1)-1):
        layer_2 += innerProd(syn1_c[i+1],layer_1_c,M_onehot[len(layer_1_c) - 2][i+1],l) / float(scaling_factor)
    return layer_2[0:output_dim+1]

def elementwise_vector_mult(x,y,scaling_factor):

    y = [y]

    one_minus_layer_1 = transpose(y)

    outer_result = list()
    for i in range(len(x)-1):
        outer_result.append(mat_mul_forward(x * onehot[len(x)-1][i],y,scaling_factor))

    return transpose(outer_result)[0]

One thing I haven't mentioned yet: to save time, I pre-compute a number of keys, vectors, and matrices and store them. These include vectors made entirely of 1s and one-hot vectors of various lengths, which help with the masking above and with a few other simple things we want to do. For example, the derivative of sigmoid is sigmoid(x) * (1 - sigmoid(x)), so having these quantities pre-computed is convenient. The pre-computation step follows.
# on the secure server side
l = 100
w = 2 ** 25

aBound = 10
tBound = 10
eBound = 10

max_dim = 10

scaling_factor = 1000

# keys
T_keys = list()
for i in range(max_dim):
    T_keys.append(np.random.rand(i+1,1))

# one-way encryption transformations
M_keys = list()
for i in range(max_dim):
    M_keys.append(innerProdClient(T_keys[i],l))

M_onehot = list()
for h in range(max_dim):
    i = h+1
    buffered_eyes = list()
    for row in np.eye(i+1):
        buffer = np.ones(i+1)
        buffer[0:i+1] = row
        buffered_eyes.append((M_keys[i-1].T * buffer).T)
    M_onehot.append(buffered_eyes)

c_ones = list()
for i in range(max_dim):
    c_ones.append(encrypt(T_keys[i],np.ones(i+1), w, l).astype('int'))

v_onehot = list()
onehot = list()
for i in range(max_dim):
    eyes = list()
    eyes_txt = list()
    for eye in np.eye(i+1):
        eyes_txt.append(eye)
        eyes.append(one_way_encrypt_vector(eye,scaling_factor))
    v_onehot.append(eyes)
    onehot.append(eyes_txt)

H_sigmoid_txt = np.zeros((5,5))

H_sigmoid_txt[0][0] = 0.5
H_sigmoid_txt[0][1] = 0.25
H_sigmoid_txt[0][2] = -1/48.0
H_sigmoid_txt[0][3] = 1/480.0
H_sigmoid_txt[0][4] = -17/80640.0

H_sigmoid = list()
for row in H_sigmoid_txt:
    H_sigmoid.append(one_way_encrypt_vector(row))

If you look closely at the code above, you'll see that the H_sigmoid matrix holds exactly the coefficients we need to evaluate the sigmoid polynomial. Finally, we train our neural network with the code below. If the neural network parts are unfamiliar, review the earlier Numpy-based neural network article on backpropagation; I basically took the XOR network from that article and replaced some of its operations with the appropriate utility functions so that the weights stay encrypted.

import sys

np.random.seed(1234)

input_dataset = [[],[0],[1],[0,1]]
output_dataset = [[0],[1],[1],[0]]

input_dim = 3
hidden_dim = 4
output_dim = 1
alpha = 0.015

# one-way encrypt the training data using the public key (could be done in place)
y = list()
for i in range(4):
    y.append(one_way_encrypt_vector(output_dataset[i],scaling_factor))

# generate the weights
syn0_t = (np.random.randn(input_dim,hidden_dim) * 0.2) - 0.1
syn1_t = (np.random.randn(output_dim,hidden_dim) * 0.2) - 0.1

# one-way encrypt the weights
syn1 = list()
for row in syn1_t:
    syn1.append(one_way_encrypt_vector(row,scaling_factor).astype('int64'))

syn0 = list()
for row in syn0_t:
    syn0.append(one_way_encrypt_vector(row,scaling_factor).astype('int64'))

# begin training
for iter in range(1000):

    decrypted_error = 0
    encrypted_error = 0
    for row_i in range(4):

        if(row_i == 0):
            layer_1 = sigmoid(syn0[0])
        elif(row_i == 1):
            layer_1 = sigmoid((syn0[0] + syn0[1])/2.0)
        elif(row_i == 2):
            layer_1 = sigmoid((syn0[0] + syn0[2])/2.0)
        else:
            layer_1 = sigmoid((syn0[0] + syn0[1] + syn0[2])/3.0)

        layer_2 = (innerProd(syn1[0],layer_1,M_onehot[len(layer_1) - 2][0],l) / float(scaling_factor))[0:2]

        layer_2_delta = add_vectors(layer_2,-y[row_i])

        syn1_trans = transpose(syn1)

        one_minus_layer_1 = [(scaling_factor * c_ones[len(layer_1) - 2]) - layer_1]
        sigmoid_delta = elementwise_vector_mult(layer_1,one_minus_layer_1[0],scaling_factor)
        layer_1_delta_nosig = mat_mul_forward(layer_2_delta,syn1_trans,1).astype('int64')
        layer_1_delta = elementwise_vector_mult(layer_1_delta_nosig,sigmoid_delta,scaling_factor) * alpha

        syn1_delta = np.array(outer_product(layer_2_delta,layer_1)).astype('int64')

        syn1[0] -= np.array(syn1_delta[0] * alpha).astype('int64')

        syn0[0] -= (layer_1_delta).astype('int64')

        if(row_i == 1):
            syn0[1] -= (layer_1_delta).astype('int64')
        elif(row_i == 2):
            syn0[2] -= (layer_1_delta).astype('int64')
        elif(row_i == 3):
            syn0[1] -= (layer_1_delta).astype('int64')
            syn0[2] -= (layer_1_delta).astype('int64')
        # if security is required, these losses could be sent elsewhere to be decrypted
        encrypted_error += int(np.sum(np.abs(layer_2_delta)) / scaling_factor)
        decrypted_error += np.sum(np.abs(s_decrypt(layer_2_delta).astype('float')/scaling_factor))

    sys.stdout.write("\rIteration " + str(iter) + " Encryption loss: " + str(encrypted_error) + " Decryption loss: " + str(decrypted_error) + " Alpha: " + str(alpha))

    # makes the log a little prettier
    if(iter % 10 == 0):
        print()

    # stop training once the encrypted error drops below this threshold
    if(encrypted_error < 25000000):
        break

print("\nFinal prediction:")

for row_i in range(4):

    if(row_i == 0):
        layer_1 = sigmoid(syn0[0])
    elif(row_i == 1):
        layer_1 = sigmoid((syn0[0] + syn0[1])/2.0)
    elif(row_i == 2):
        layer_1 = sigmoid((syn0[0] + syn0[2])/2.0)
    else:
        layer_1 = sigmoid((syn0[0] + syn0[1] + syn0[2])/3.0)

    layer_2 = (innerProd(syn1[0],layer_1,M_onehot[len(layer_1) - 2][0],l) / float(scaling_factor))[0:2]
    print("True prediction: " + str(output_dataset[row_i]) + " Encrypted prediction: " + str(layer_2) + " Decrypted prediction: " + str(s_decrypt(layer_2) / scaling_factor))

Output:

Iteration 0 Encryption loss: Decryption loss: 2.529 Alpha: 0.015
Iteration 10 Encryption loss: Decryption loss: 2.071 Alpha: 0.015
Iteration 20 Encryption loss: Decryption loss: 1.907 Alpha: 0.015
Iteration 30 Encryption loss: Decryption loss: 1.858 Alpha: 0.015
Iteration 40 Encryption loss: Decryption loss: 1.843 Alpha: 0.015
Iteration 50 Encryption loss: Decryption loss: 1.829 Alpha: 0.015
Iteration 60 Encryption loss: Decryption loss: 1.811 Alpha: 0.015
Iteration 70 Encryption loss: Decryption loss: 1.797 Alpha: 0.015
Iteration 80 Encryption loss: Decryption loss: 1.786 Alpha: 0.015
Iteration 90 Encryption loss: Decryption loss: 1.778 Alpha: 0.015
Iteration 100 Encryption loss: Decryption loss: 1.769 Alpha: 0.015
Iteration 110 Encryption loss: Decryption loss: 1.763 Alpha: 0.015
Iteration 120 Encryption loss: Decryption loss: 1.757 Alpha: 0.015
Iteration 130 Encryption loss: Decryption loss: 1.75 Alpha: 0.015
Iteration 140 Encryption loss: Decryption loss: 1.744 Alpha: 0.015
Iteration 150 Encryption loss: Decryption loss: 1.739 Alpha: 0.015
Iteration 160 Encryption loss: Decryption loss: 1.732 Alpha: 0.015
Iteration 170 Encryption loss: Decryption loss: 1.725 Alpha: 0.015
Iteration 180 Encryption loss: Decryption loss: 1.653 Alpha: 0.015
Iteration 190 Encryption loss: Decryption loss: 1.629 Alpha: 0.015
Iteration 200 Encryption loss: Decryption loss: 1.605 Alpha: 0.015
Iteration 210 Encryption loss: Decryption loss: 1.541 Alpha: 0.015
Iteration 220 Encryption loss: Decryption loss: 1.621 Alpha: 0.015
Iteration 230 Encryption loss: Decryption loss: 1.638 Alpha: 0.015
Iteration 240 Encryption loss: Decryption loss: 1.6
