
Logistic regression model training based on the approximate homomorphic encryption.

Andrey Kim, Yongsoo Song, Miran Kim, Keewoo Lee, Jung Hee Cheon.

Abstract

BACKGROUND: Security concerns have been raised since big data became a prominent tool in data analysis. For instance, many machine learning algorithms aim to generate prediction models using training data which contain sensitive information about individuals. The cryptography community is considering secure computation as a solution for privacy protection. In particular, practical requirements have triggered research on the efficiency of cryptographic primitives.
METHODS: This paper presents a method to train a logistic regression model without information leakage. We apply the homomorphic encryption scheme of Cheon et al. (ASIACRYPT 2017) for an efficient arithmetic over real numbers, and devise a new encoding method to reduce storage of encrypted database. In addition, we adapt Nesterov's accelerated gradient method to reduce the number of iterations as well as the computational cost while maintaining the quality of an output classifier.
RESULTS: Our method shows a state-of-the-art performance of homomorphic encryption system in a real-world application. The submission based on this work was selected as the best solution of Track 3 at iDASH privacy and security competition 2017. For example, it took about six minutes to obtain a logistic regression model given the dataset consisting of 1579 samples, each of which has 18 features with a binary outcome variable.
CONCLUSIONS: We present a practical solution for outsourcing analysis tools such as logistic regression analysis while preserving the data confidentiality.


Keywords:  Homomorphic encryption; Logistic regression; Machine learning


Year:  2018        PMID: 30309349      PMCID: PMC6180367          DOI: 10.1186/s12920-018-0401-7

Source DB:  PubMed          Journal:  BMC Med Genomics        ISSN: 1755-8794            Impact factor:   3.063


Background

Machine learning (ML) is a class of methods in artificial intelligence whose characteristic feature is that they do not solve one particular problem directly but learn a process for finding solutions to a set of similar problems. The theory of ML appeared in the early 1960s on the basis of the achievements of cybernetics [1] and gave impetus to the development of the theory and practice of technically complex learning systems [2]. The goal of ML is to partially or fully automate the solution of complicated tasks in various fields of human activity, and its scope of application is constantly expanding. With the rise of ML, however, security has become an important issue. For example, many medical decisions rely on logistic regression models, and biomedical data usually contain confidential information about individuals [3] which should be treated carefully. Privacy and security of data are therefore major concerns, especially when deploying outsourced analysis tools.

There have been several studies on secure computation based on cryptographic primitives. Nikolaenko et al. [4] presented a privacy-preserving linear regression protocol on horizontally partitioned data using Yao's garbled circuits [5]. Multi-party computation techniques have also been applied to privacy-preserving logistic regression [6-8]. However, this approach is vulnerable when a party behaves dishonestly, and the assumption of secret sharing is quite different from that of outsourced computation.

Homomorphic encryption (HE) is a cryptosystem that allows us to perform certain arithmetic operations on encrypted data and receive an encrypted result that corresponds to the result of the same operations performed on the plaintext. Several papers have already discussed ML with HE techniques. Wu et al.
[9] used the Paillier cryptosystem [10] and approximated the logistic function by polynomials, but the computational cost grows exponentially in the degree of the approximation polynomial. Aono et al. [11] and Xie et al. [12] used an additive HE scheme to aggregate some intermediate statistics. However, the scenario of Aono et al. relies on the client to decrypt these intermediate statistics, and the method of Xie et al. requires an expensive computation to obtain them. The work most closely related to this paper is that of Kim et al. [13], which also used HE-based ML. However, the size of the encrypted data and the learning time were highly dependent on the number of features, so the performance on large datasets was not practical in terms of storage and computational cost.

Since 2011, the iDASH Privacy and Security Workshop has assembled specialists in privacy technology to discuss issues in biomedical data sharing, together with the main stakeholders, who provide an overview of the main uses of the data, the relevant laws and regulations, and their own views on privacy. Since 2014, annual competitions have been held on the basis of the workshop. The goal of this challenge is to evaluate the performance of state-of-the-art methods that ensure rigorous data confidentiality during data analysis in a cloud environment. In this paper, we provide a solution to the third track of the iDASH 2017 competition, which aims to develop HE-based secure solutions for building an ML model (i.e., logistic regression) on encrypted data. We propose a general practical solution for HE-based ML that demonstrates good performance and low storage costs. In practice, our output quality is comparable to that of the unencrypted learning case. As a basis, we use the HE scheme for approximate arithmetic [14].
To improve the performance, we apply several additional techniques, including a packing method which reduces the required storage space and optimizes the computation time. We also adapt Nesterov's accelerated gradient [15] to increase the speed of convergence; as a result, we obtain a high-accuracy classifier using only a small number of iterations. We give an open-source implementation [16] to demonstrate the performance of our HE-based ML method. With our packing method we can encrypt a dataset with 1579 samples and 18 features using 39 MB of memory, and the encrypted learning time is about six minutes. We also run our implementation on the datasets used in [13] to compare the results. For example, training a logistic regression model on a dataset of 1253 samples with 9 features took about 3.6 min with about 0.02 GB of storage, compared to 114 min and 0.69 GB for Kim et al. [13].

Methods

Logistic regression

Logistic regression, or the logit model, is an ML model used to predict the probability of occurrence of an event by fitting data to a logistic curve [17]. It is widely used in various fields including machine learning, biomedicine [18], genetics [19], and the social sciences [20]. Throughout this paper, we treat the case of a binary dependent variable, represented by ±1. The learning data consist of pairs (x_i, y_i) of a vector of covariates x_i ∈ R^f and a dependent variable y_i ∈ {±1}. Logistic regression aims to find an optimal β ∈ R^(f+1) which maximizes the likelihood estimator, or equivalently minimizes the loss function, defined as the negative log-likelihood:

J(β) = (1/n) Σ_{i=1..n} log(1 + exp(−z_i^T β)),

where z_i = y_i·(1, x_i) for i = 1, …, n.
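As a plaintext reference point (independent of any encryption), the loss above can be sketched in a few lines of Python; the function names are ours, not taken from the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logistic_loss(beta, X, y):
    """Negative log-likelihood J(beta) = (1/n) * sum_i log(1 + exp(-z_i . beta)),
    where z_i = y_i * (1, x_i)."""
    Z = y[:, None] * np.hstack([np.ones((len(y), 1)), X])  # rows are z_i
    return np.mean(np.log1p(np.exp(-Z @ beta)))
```

At beta = 0 every sample contributes log 2, which is a quick sanity check for the implementation.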

Gradient descent

Gradient descent (GD) is a method for finding a local extremum (minimum or maximum) of a function by moving along its gradient; to choose the step size along the gradient direction, one-dimensional optimization methods can be used. For logistic regression, the gradient of the loss function with respect to β is computed as

∇J(β) = −(1/n) Σ_{i=1..n} σ(−z_i^T β)·z_i,

where σ(x) = 1/(1 + exp(−x)). Starting from an initial β^(0), the gradient descent method at each step t updates the regression parameters using the equation

β^(t+1) ← β^(t) + (α_t/n) Σ_{i=1..n} σ(−z_i^T β^(t))·z_i,

where α_t is the learning rate at step t.
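A minimal plaintext sketch of one GD update, using the sign conventions of the loss above (the rows of `Z` are the vectors z_i = y_i·(1, x_i); the helper name `gd_step` is ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gd_step(beta, Z, alpha):
    """One update: beta <- beta + (alpha/n) * sum_i sigmoid(-z_i.beta) * z_i."""
    n = Z.shape[0]
    grad_dir = sigmoid(-Z @ beta) @ Z / n  # (1/n) * sum_i sigmoid(-z_i.beta) * z_i
    return beta + alpha * grad_dir
```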

Nesterov’s accelerated gradient

The GD method can zig-zag toward a local optimum, and this behavior becomes typical as the number of variables of the objective function increases. Many GD optimization algorithms are used to overcome this phenomenon. The momentum method, for example, dampens oscillation using an exponential moving average of the gradient of the loss function. Nesterov's accelerated gradient [15] is a slightly different variant of the momentum update: it keeps a moving average of the update vector and evaluates the gradient at this "looked-ahead" position. Theoretically it guarantees a better convergence rate of O(1/t^2) after t steps (vs. O(1/t) for the standard GD algorithm), and it consistently works slightly better in practice. Starting from a random initial β^(0) with v^(0) = β^(0), the update equations of Nesterov's accelerated GD are as follows:

β^(t+1) = v^(t) − α_t·∇J(v^(t)),
v^(t+1) = (1 − γ_t)·β^(t+1) + γ_t·β^(t),     (1)

where 0 < γ_t < 1 is a moving-average smoothing parameter.
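In plaintext, one Nesterov step for the logistic loss looks like the following sketch (same conventions as the GD snippet; `nesterov_step` is our name, and the gradient is evaluated at the looked-ahead point v):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nesterov_step(beta, v, Z, alpha, gamma):
    """One Nesterov update: gradient evaluated at the 'looked-ahead' point v;
    the new v is a moving average of the new and previous beta."""
    n = Z.shape[0]
    beta_next = v + alpha * (sigmoid(-Z @ v) @ Z) / n
    v_next = (1.0 - gamma) * beta_next + gamma * beta
    return beta_next, v_next
```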

Approximate homomorphic encryption

HE is a cryptographic scheme that allows us to carry out operations on encrypted data without decryption. Cheon et al. [14] presented a method to construct an HE scheme for arithmetic of approximate numbers (called HEAAN in what follows). The main idea is to treat the encryption noise as part of the error occurring during approximate computations. That is, an encryption ct of a message m under a secret key sk for a ciphertext modulus q has a decryption structure of the form ⟨ct, sk⟩ = m + e (mod q) for some small e. The following is a simple description of HEAAN based on the ring learning with errors problem. For a power-of-two integer N, the cyclotomic polynomial ring of dimension N is denoted by R = Z[X]/(X^N + 1). For a positive integer ℓ, we denote by R_ℓ = R/2^ℓ·R the residue ring of R modulo 2^ℓ.

- KeyGen(1^λ). For an integer L that corresponds to the largest ciphertext modulus level, given the security parameter λ, output a ring dimension N which is a power of two. Set the small distributions χ_key, χ_err, χ_enc over R for secret, error, and encryption, respectively. Sample a secret s ← χ_key, a random a ← R_L, and an error e ← χ_err. Set the secret key as sk ← (1, s) and the public key as pk ← (b, a) ∈ R_L^2 where b ← −a·s + e (mod 2^L).
- KSGen_sk(s′). For s′ ∈ R, sample a random a′ ← R_{2L} and an error e′ ← χ_err. Output the switching key as swk ← (b′, a′) ∈ R_{2L}^2 where b′ ← −a′·s + e′ + 2^L·s′ (mod 2^{2L}). Set the evaluation key as evk ← KSGen_sk(s^2).
- Enc_pk(m). For m ∈ R, sample v ← χ_enc and e_0, e_1 ← χ_err. Output ct ← v·pk + (m + e_0, e_1) (mod 2^L).
- Dec_sk(ct). For ct = (c_0, c_1) ∈ R_ℓ^2, output c_0 + c_1·s (mod 2^ℓ).
- Add(ct_1, ct_2). For ct_1, ct_2 ∈ R_ℓ^2, output ct_add ← ct_1 + ct_2 (mod 2^ℓ).
- CMult(ct; c). For ct ∈ R_ℓ^2 and c ∈ R, output ct′ ← c·ct (mod 2^ℓ).
- Mult_evk(ct_1, ct_2). For ct_i = (b_i, a_i) ∈ R_ℓ^2, let (d_0, d_1, d_2) = (b_1·b_2, a_1·b_2 + a_2·b_1, a_1·a_2) (mod 2^ℓ). Output ct_mult ← (d_0, d_1) + ⌊2^{−L}·d_2·evk⌉ (mod 2^ℓ).
- ReScale(ct; p). For a ciphertext ct ∈ R_ℓ^2 and an integer p, output ct′ ← ⌊2^{−p}·ct⌉ (mod 2^{ℓ−p}).

For a power-of-two integer k ≤ N/2, HEAAN provides a technique to pack k complex numbers into a single polynomial using a variant of the complex canonical embedding map C^k → R. We restrict the plaintext space to vectors of real numbers throughout this paper.
Moreover, we multiply plaintexts by a scale factor of 2^p before the rounding operation to maintain their precision.

- Encode(w; p). For w ∈ R^k, output the polynomial m ∈ R encoding the scaled vector 2^p·w.
- Decode(m; p). For a plaintext m ∈ R, the encoding of an array of k messages (k ≤ N/2 a power of two), output the vector w ∈ R^k recovered by removing the scale factor 2^p.

The encoding/decoding techniques support parallel computation over encryption, yielding a better amortized timing. In addition, the HEAAN scheme provides a rotation operation on plaintext slots, i.e., it enables us to securely obtain an encryption of the shifted plaintext vector (w_r, …, w_{k−1}, w_0, …, w_{r−1}) from an encryption of (w_0, …, w_{k−1}). This requires an additional public value rk_r, called the rotation key. We denote the rotation operation as follows.

- Rotate(ct; r). Using the rotation key rk_r, output a ciphertext ct′ encrypting the plaintext vector of ct rotated by r positions.

Refer to [14] for the technical details and noise analysis.
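To make the role of the scale factor concrete, here is a plaintext toy model of the Encode/Mult/ReScale bookkeeping (ours for illustration only, not the HEAAN library API; `P`, `encode`, `rescale` are illustrative names, and all the encryption machinery is omitted):

```python
# Plaintext toy model of HEAAN's fixed-point bookkeeping (no encryption):
# Encode scales a real vector by 2^P and rounds; after a slot-wise
# multiplication the scale doubles to 2^(2P), and ReScale drops P bits
# to restore it.
P = 30  # scale bits, as an example

def encode(w, p=P):
    return [round(x * (1 << p)) for x in w]

def decode(m, p=P):
    return [x / (1 << p) for x in m]

def mult(m1, m2):      # slot-wise product; result carries scale 2^(2P)
    return [a * b for a, b in zip(m1, m2)]

def rescale(m, p=P):   # drop p bits to bring the scale back to 2^P
    return [round(x / (1 << p)) for x in m]

a, b = [0.5, 1.25], [2.0, -0.75]
prod = decode(rescale(mult(encode(a), encode(b))))
# prod approximates the slot-wise product [1.0, -0.9375]
```

This mirrors why Steps 1 and 6 below are each followed by a ReScale by p bits: without it, the scale (and the ciphertext noise) would grow with every multiplication.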

Database encoding

For an efficient computation, it is crucial to find a good encoding method for the given database. The HEAAN scheme supports the encryption of a plaintext vector and slot-wise operations over encryption. However, our learning data are represented by a matrix (z_ij) with 1 ≤ i ≤ n and 1 ≤ j ≤ f+1. A recent work [13] used the column-wise approach, i.e., a vector (z_ij)_{1≤i≤n} of one specific feature is encrypted in a single ciphertext; consequently, this method requires (f+1) ciphertexts to encrypt the whole dataset. In this subsection, we suggest a more efficient encoding method that encrypts the matrix in a single ciphertext.

A training dataset consists of n samples z_i ∈ R^{f+1} for 1 ≤ i ≤ n, which can be represented as the n × (f+1) matrix Z = (z_ij). For simplicity, we assume that n and (f+1) are power-of-two integers satisfying log n + log(f+1) ≤ log(N/2). Then we can pack the whole matrix into a single ciphertext in a row-by-row manner. Specifically, we identify the matrix with the k-dimensional vector w = (w_ℓ)_{0≤ℓ<k}, k = n(f+1), where w_ℓ = z_ij for ℓ = (f+1)·(i−1) + (j−1). In the general case, we can pad zeros to make the number of samples and the dimension of the weight vector powers of two.

It is necessary to perform shifting operations on row and column vectors for the evaluation of the GD algorithm. In the rest of this subsection, we explain how to perform these operations using the rotation algorithm provided by the HEAAN scheme. As described above, the algorithm Rotate(ct; r) shifts the encrypted vector by r positions; in our implementation this operation is used with r = f+1 and r = 1. In the first case, a given matrix Z = (z_ij) is converted over encryption into the matrix Z′ obtained from Z by cyclically shifting its row vectors, while the latter case outputs the matrix Z′′ in which every entry moves one column to the left and the last column of each row wraps into the next row. Hence Z′′ can be viewed as an incomplete column shifting because of its last column.
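The row-major packing and the effect of the two rotations can be simulated in plaintext with `numpy` (a sketch under our naming; `np.roll` with a negative shift stands in for Rotate, and the toy sizes are not the paper's):

```python
import numpy as np

# Row-major packing of an n x (f+1) matrix into one slot vector, and the
# effect of the two rotations used later (r = f+1: row shifting; r = 1:
# incomplete column shifting).
n, fp1 = 4, 4                              # toy power-of-two sizes
Z = np.arange(n * fp1).reshape(n, fp1).astype(float)
w = Z.reshape(-1)                          # w[(f+1)(i-1) + (j-1)] = z_ij

Z_rows = np.roll(w, -fp1).reshape(n, fp1)  # Rotate(ct; f+1): rows shifted up
Z_cols = np.roll(w, -1).reshape(n, fp1)    # Rotate(ct; 1): columns shifted
                                           # left; last column wraps into
                                           # the next row
```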

Polynomial approximation of the sigmoid function

One limitation of existing HE cryptosystems is that they only support polynomial arithmetic operations. The evaluation of the sigmoid function is an obstacle for the implementation of logistic regression since it cannot be expressed as a polynomial. Kim et al. [13] used the least squares approach to find a global polynomial approximation of the sigmoid function. We adapt this approximation method and consider the least squares polynomials of degree 3, 5, and 7 of the sigmoid function over the domain [−8, 8]; we observed that the inner product values in our experiments belong to this interval. For simplicity, a least squares polynomial of σ(−x) is denoted by g(x), so that g(z_i^T β) ≈ σ(−z_i^T β) whenever z_i^T β ∈ [−8, 8]. A low-degree polynomial requires a smaller evaluation depth, while a high-degree polynomial has better precision: the maximum errors between σ(−x) and the least squares polynomials g_3(x), g_5(x), and g_7(x) are approximately 0.114, 0.061, and 0.032, respectively.
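Such least squares approximations can be reproduced numerically; the sketch below fits σ(−x) on a dense grid over [−8, 8] (a discrete stand-in for the continuous least squares problem, so the coefficients differ slightly from the paper's):

```python
import numpy as np

# Least squares fit of sigma(-x) = 1/(1 + exp(x)) by low-degree polynomials
# on [-8, 8], mirroring the approach described above.
xs = np.linspace(-8, 8, 2001)
target = 1.0 / (1.0 + np.exp(xs))  # sigma(-x)

errs = {}
for deg in (3, 5, 7):
    coeffs = np.polynomial.polynomial.polyfit(xs, target, deg)
    approx = np.polynomial.polynomial.polyval(xs, coeffs)
    errs[deg] = np.abs(approx - target).max()
# errs[3], errs[5], errs[7] come out close to the maximum errors quoted above
```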

Homomorphic evaluation of the gradient descent

This section explains how to securely train the logistic regression model using the HEAAN scheme. To be precise, we explicitly describe the full pipeline of the evaluation of the GD algorithm. We adopt the same assumptions as in the previous section, so that the whole database can be encrypted in a single ciphertext.

First of all, a client encrypts the dataset and the initial (random) weight vector β^(0) and sends them to the public cloud. The dataset is encoded into a matrix Z of size n × (f+1), and the weight vector is copied n times to fill the plaintext slots, giving the ciphertexts ct_z and ct_β^(0). As mentioned before, both Z and β^(0) are scaled by a factor of 2^p before encryption to maintain the precision of plaintexts. We do not mention the scaling factor again in the rest of this section, since every step returns a ciphertext with scaling factor 2^p.

The public server takes the two ciphertexts ct_z and ct_β^(t) and evaluates the GD algorithm to find an optimal modeling vector. The goal of each iteration is to update the modeling vector β^(t) using the gradient of the loss function:

β^(t+1) ← β^(t) + (α_t/n) Σ_{i=1..n} σ(−z_i^T β^(t))·z_i,

where α_t denotes the learning rate at the t-th iteration. Each iteration consists of the following eight steps.

Step 1: For the two ciphertexts ct_z and ct_β^(t), compute their multiplication and rescale it by p bits: ct_1 ← ReScale(Mult(ct_z, ct_β^(t)); p). The output ciphertext contains the values z_ij·β_j^(t) in its plaintext slots.

Step 2: To obtain the inner products z_i^T β^(t), the public cloud aggregates the values within each row. One simple way is to repeat the incomplete column shifting operation (f+1) times, but the computational cost can be reduced to log(f+1) by adding ct_1 to its own rotations recursively: ct_2 ← Add(ct_2, Rotate(ct_2; 2^j)) for j = 0, 1, …, log(f+1)−1, starting from ct_2 = ct_1. The output ciphertext ct_2 encrypts the inner product values in its first column and some "garbage" values, denoted by ⋆, in the other columns.

Step 3: This step performs a constant multiplication in order to annihilate the garbage values. Compute the encoding polynomial c ← Encode(C; p_c) of the matrix C whose first column is all ones and whose other entries are zeros, using a scaling factor of 2^{p_c} for some integer p_c. The parameter p_c is chosen as the bit precision of plaintexts, so it can be smaller than the parameter p. Finally, multiply the polynomial c with the ciphertext ct_2 and rescale by p_c bits: ct_3 ← ReScale(c·ct_2; p_c). The garbage values are multiplied by zero while the inner products are maintained in the plaintext slots, so the output ciphertext ct_3 encrypts the inner product values in the first column and zeros in the others.

Step 4: The goal of this step is to replicate the inner product values to the other columns. Similarly to Step 2, this is done by adding the input ciphertext to its column shiftings recursively, but in the opposite direction: ct_4 ← Add(ct_4, Rotate(ct_4; −2^j)) for j = 0, 1, …, log(f+1)−1, starting from ct_4 = ct_3. The output ciphertext ct_4 holds the same inner product value z_i^T β^(t) in every slot of the i-th row.

Step 5: This step simply evaluates an approximating polynomial of the sigmoid function, i.e., ct_5 ← g(ct_4) for some g ∈ {g_3, g_5, g_7}. The output ciphertext encrypts the values g(z_i^T β^(t)) ≈ σ(−z_i^T β^(t)) in its plaintext slots.

Step 6: The public cloud multiplies the ciphertext ct_5 with the encrypted dataset ct_z and rescales the result by p bits: ct_6 ← ReScale(Mult(ct_5, ct_z); p). The output ciphertext encrypts the n vectors g(z_i^T β^(t))·z_i, one per row.

Step 7: This step aggregates the vectors of Step 6 to compute the gradient of the loss function. It is obtained by recursively adding ct_6 to its row shiftings: ct_7 ← Add(ct_7, Rotate(ct_7; 2^j)) for j = log(f+1), …, log(f+1) + log n − 1, starting from ct_7 = ct_6. The output ciphertext encrypts the gradient sum Σ_i g(z_i^T β^(t))·z_i in each row, as desired.

Step 8: For the learning rate α_t, use the parameter p_c to compute the scaled learning rate Δ ← ⌊2^{p_c}·(α_t/n)⌉. The public cloud updates β^(t) using the ciphertext ct_7 and the constant Δ: ct_β^(t+1) ← Add(ct_β^(t), ReScale(Δ·ct_7; p_c)). Finally, it returns a ciphertext encrypting the updated modeling vector β^(t+1) = β^(t) + (α_t/n)·Σ_i g(z_i^T β^(t))·z_i.
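The eight steps above can be mocked up in plaintext to check the slot bookkeeping (a sketch under our naming: numpy arrays stand in for ciphertext slots, `np.roll` for Rotate, the true sigmoid for the polynomial g, and scaling factors are ignored):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy sizes: n samples, f+1 columns, both powers of two.
n, fp1 = 4, 4
rng = np.random.default_rng(0)
Z = rng.standard_normal((n, fp1))
beta = rng.standard_normal(fp1)
alpha = 1.0

ct = Z.reshape(-1)                      # "encrypted" dataset, row-major slots
ct_b = np.tile(beta, n)                 # beta copied into every row

ct1 = ct * ct_b                         # Step 1: slot-wise z_ij * beta_j
ct2 = ct1.copy()                        # Step 2: log(f+1) rotations sum rows
for j in range(int(np.log2(fp1))):
    ct2 = ct2 + np.roll(ct2, -(1 << j))
C = np.zeros((n, fp1)); C[:, 0] = 1.0   # Step 3: mask kills garbage columns
ct3 = ct2 * C.reshape(-1)
ct4 = ct3.copy()                        # Step 4: opposite-direction rotations
for j in range(int(np.log2(fp1))):      #         replicate the inner products
    ct4 = ct4 + np.roll(ct4, 1 << j)
ct5 = sigmoid(-ct4)                     # Step 5: (true sigmoid here; encrypted
                                        #         version evaluates g3/g5/g7)
ct6 = ct5 * ct                          # Step 6: sigmoid(-z_i.beta) * z_i
ct7 = ct6.copy()                        # Step 7: log(n) rotations sum samples
for j in range(int(np.log2(fp1)), int(np.log2(fp1 * n))):
    ct7 = ct7 + np.roll(ct7, -(1 << j))
ct_b_next = ct_b + (alpha / n) * ct7    # Step 8: scaled update of beta
beta_next = ct_b_next[:fp1]
```

The result agrees with the direct plaintext update β + (α/n)·Σ_i σ(−z_i^T β)·z_i, which is exactly what the encrypted pipeline is designed to compute.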

Homomorphic evaluation of Nesterov’s accelerated gradient

The performance of leveled HE schemes highly depends on the depth of the circuit to be evaluated. The bottleneck of the homomorphic evaluation of the GD algorithm is that the update of the weight vector β^(t) must be repeated iteratively; consequently, the total depth grows linearly in the number of iterations and should be minimized for a practical implementation. For the homomorphic evaluation of Nesterov's accelerated gradient, a client sends one more ciphertext encrypting the initial vector v^(0) to the public cloud. The server then uses the encryption ct_z of the dataset Z to update the two ciphertexts ct_v^(t) and ct_β^(t) at each iteration. One can securely compute β^(t+1) in the same way as in the previous section, evaluating the gradient at v^(t). Nesterov's accelerated gradient requires one more step to compute the second equation of (1) and obtain an encryption of v^(t+1) from ct_β^(t+1) and ct_β^(t).

Step 9: Let Γ ← ⌊2^{p_c}·(1−γ_t)⌉ and Γ′ ← ⌊2^{p_c}·γ_t⌉. Obtain the ciphertext ct_v^(t+1) by computing ct_v^(t+1) ← ReScale(Γ·ct_β^(t+1) + Γ′·ct_β^(t); p_c). The output ciphertext encrypts (1−γ_t)·β^(t+1) + γ_t·β^(t) in its plaintext slots.

Results

In this section, we present parameter sets with experimental results. Our implementation is based on the HEAAN library [21] that implements the approximate HE scheme of Cheon et al. [14]. The source code is publicly available at github [16].

Parameters settings

We explain how to choose the parameter sets for the homomorphic evaluation of the (Nesterov's) GD algorithm, with a security analysis. We start with the parameter L, the bit size of a fresh ciphertext modulus. The modulus of a ciphertext is reduced by the ReScale operations and by the evaluation of an approximate polynomial g(x). The ReScale procedures after homomorphic multiplications (Steps 1 and 6) reduce the ciphertext modulus by p bits, while the ReScale procedures after constant multiplications (Steps 3 and 8) require p_c bits of modulus reduction. Note that the ciphertext modulus remains the same in Step 9 of Nesterov's accelerated gradient if we compute Steps 8 and 9 together using some precomputed constants. We use a method similar to previous work for the evaluation of the sigmoid function (see [13] for details); the ciphertext modulus is reduced by (2p+3) bits for the evaluation of g_3(x), and by (3p+3) bits for that of g_5(x) and g_7(x). Therefore, we obtain a lower bound on the parameter L in terms of ITERNUM, the number of iterations of the GD algorithm, and L_0, the bit size of the output ciphertext modulus. The modulus of the output ciphertext should be larger than 2^{L_0} in order to encrypt the resulting weight vector and maintain its precision. We take p = 30, p_c = 20, and L_0 = 35 in our implementation.

The dimension of the cyclotomic ring is chosen as N = 2^16, following the security estimator of Albrecht et al. [22] for the learning with errors problem. In this case, the bit size L of a fresh ciphertext modulus should be bounded by 1284 to ensure a security level of λ = 80 against known attacks. Hence we run ITERNUM = 9 iterations of the GD algorithm when g = g_3, and ITERNUM = 7 iterations when g = g_5 or g = g_7. The smoothing parameter γ_t is chosen in accordance with [15]. The choice of a proper GD learning rate α_t normally depends on the problem at hand: too small an α_t leads to slow convergence, while too large an α_t can lead to divergence or to fluctuation near a local optimum. The learning rate is often optimized by trial and error, which we were unable to perform over encrypted data. Under these conditions a harmonic progression seems to be a good candidate, and we choose such a learning rate in our implementation.

Implementation

All experiments were performed on a machine with an Intel Xeon CPU E5-2620 v4 at 2.10 GHz.

Task for the iDASH challenge

In the genomic data privacy and security protection competition 2017, the goal of Track 3 was to devise a weight vector to predict disease using genotype and phenotype data (Additional file 1: iDASH). This dataset consists of 1579 samples, each of which has 18 features and a cohort label (disease vs. healthy). Since we use the ring dimension N = 2^16, a single ciphertext can pack only up to N/2 = 2^15 values, which is not enough for the whole dataset once its dimensions are padded to powers of two. We overcome this issue by dividing the dataset into two parts of sizes 1579 × 16 and 1579 × 3 and encoding them separately into two ciphertexts. In general, this method can be applied to datasets with any number of features: the dataset can be encrypted into ⌈(f+1)·n·(N/2)^{−1}⌉ ciphertexts (with n and f+1 padded to powers of two). To estimate the validity of our method, we used the 10-fold cross-validation (CV) technique: it randomly partitions the dataset into ten folds of approximately equal size, and each subset of 9 folds is used for training with the remaining fold for testing the model. The performance of our solution, including the average running time per fold of 10-fold CV (encryption and evaluation) and the storage (encrypted dataset), is shown in Table 1. The table also reports the average accuracy and AUC (area under the receiver operating characteristic curve), which estimate the quality of a binary classifier.
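Our reading of this splitting rule can be sketched as follows; `num_ciphertexts` is a hypothetical helper, and the power-of-two padding of the sample count is our assumption based on the encoding section:

```python
from math import ceil, log2

# How many ciphertexts the row-major encoding needs once the number of
# samples is padded to a power of two (our reading of the splitting above).
def num_ciphertexts(n, f, N):
    slots = N // 2                        # N/2 real slots per ciphertext
    n_pad = 1 << ceil(log2(n))            # samples padded to a power of two
    cols_per_ct = max(1, slots // n_pad)  # feature columns per ciphertext
    return ceil((f + 1) / cols_per_ct)

# iDASH: 1579 samples pad to 2048, so 16 columns fit per ciphertext and the
# 19 columns split into two ciphertexts (16 + 3), matching the text above.
```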
Table 1

Implementation results for iDASH dataset with 10-fold CV

Sample num   Feature num   deg g   Iter num   Enc time   Learn time   Storage   Accuracy   AUC
1579         18            3       9          4 s        7.94 min     0.04 GB   61.72%     0.677
1579         18            5       7          4 s        6.07 min     0.04 GB   62.87%     0.689
1579         18            7       7          4 s        7.01 min     0.04 GB   62.36%     0.689
Comparison

We present experimental results to compare the performance of our implementation with [13]. For a fair comparison, we use the same 5-fold CV technique on five datasets: the Myocardial Infarction dataset from Edinburgh [23] (Additional file 2: Edinburgh), the Low Birth Weight Study (Additional file 3: lbw), Nhanes III (Additional file 4: nhanes3), the Prostate Cancer Study (Additional file 5: pcs), and the Umaru Impact Study (Additional file 6: uis) [24-27]. All datasets have a single binary outcome variable. The results are summarized in Table 2. Our new packing method reduces the storage of ciphertexts, and the use of Nesterov's accelerated gradient achieves a much higher speed than the approach of [13]. For example, it took 3.6 min to train a logistic regression model on the encrypted Edinburgh dataset of size 0.02 GB, compared to 114 min and 0.69 GB in the previous work [13], while achieving comparable quality of the output models.
Table 2

Implementation results for other datasets with 5-fold CV

Dataset     Sample num   Feature num   Method   deg g   Iter num   Enc time   Learn time   Storage   Accuracy   AUC
Edinburgh   1253         9             Ours     5       7          2 s        3.6 min      0.02 GB   91.04%     0.958
                                       [13]     3       25         12 s       114 min      0.69 GB   86.03%     0.956
                                       [13]     7       20         12 s       114 min      0.71 GB   86.19%     0.954
lbw         189          9             Ours     5       7          2 s        3.3 min      0.02 GB   69.19%     0.689
                                       [13]     3       25         11 s       99 min       0.67 GB   69.30%     0.665
                                       [13]     7       20         11 s       86 min       0.70 GB   69.29%     0.678
nhanes3     15649        15            Ours     5       7          14 s       7.3 min      0.16 GB   79.22%     0.717
                                       [13]     3       25         21 s       235 min      1.15 GB   79.23%     0.732
                                       [13]     7       20         21 s       208 min      1.17 GB   79.23%     0.737
pcs         379          9             Ours     5       7          2 s        3.5 min      0.02 GB   68.27%     0.740
                                       [13]     3       25         11 s       103 min      0.68 GB   68.85%     0.742
                                       [13]     7       20         11 s       97 min       0.70 GB   69.12%     0.750
uis         575          8             Ours     5       7          2 s        3.5 min      0.02 GB   74.44%     0.603
                                       [13]     3       25         10 s       104 min      0.61 GB   74.43%     0.585
                                       [13]     7       20         10 s       96 min       0.63 GB   75.43%     0.617

Discussion

The rapid growth of computing power has initiated the study of more complicated ML algorithms in various fields, including biomedical data analysis [28, 29]. HE is a promising solution for the privacy issue, but its efficiency in real applications remains an open question. A natural direction is to extend this work to other ML algorithms such as deep learning. One constraint of our approach is that the number of iterations of the GD algorithm is limited by the choice of HE parameters. In terms of asymptotic complexity, applying the bootstrapping method of the approximate HE scheme [30] to the GD algorithm would achieve a computational cost linear in the number of iterations.

Conclusion

In this paper, we presented a solution for homomorphically evaluating the learning phase of the logistic regression model using the gradient descent algorithm and the approximate HE scheme. Our solution demonstrates good performance, and the quality of learning is comparable to that of the unencrypted case. Our encoding method can easily be extended to large-scale datasets, which shows the practical potential of our approach.

Additional files

Additional file 1: iDASH. iDASH challenge dataset (TXT 59 kb)
Additional file 2: Edinburgh. The Myocardial Infarction dataset from Edinburgh (TXT 24 kb)
Additional file 3: lbw. Low Birth Weight Study dataset (TXT 4 kb)
Additional file 4: nhanes3. Nhanes III dataset (TXT 745 kb)
Additional file 5: pcs. Prostate Cancer Study dataset (TXT 9 kb)
Additional file 6: uis. Umaru Impact Study dataset (TXT 11 kb)
