| Literature DB >> 35002664 |
Yanan Bai, Quanliang Liu, Wenyuan Wu, Yong Feng.
Abstract
The emerging topic of privacy-preserving deep learning as a service has attracted increasing attention in recent years, which focuses on building an efficient and practical neural network prediction framework to secure client and model-holder data privately on the cloud. In such a task, the time cost of performing the secure linear layers is expensive, where matrix multiplication is the atomic operation. Most existing mix-based solutions heavily emphasized employing BGV-based homomorphic encryption schemes to secure the linear layer on the CPU platform. However, they suffer an efficiency and energy loss when dealing with a larger-scale dataset, due to the complicated encoded methods and intractable ciphertext operations. To address it, we propose cuSCNN, a secure and efficient framework to perform the privacy prediction task of a convolutional neural network (CNN), which can flexibly perform on the GPU platform. Its main idea is 2-fold: (1) To avoid the trivia and complicated homomorphic matrix computations brought by BGV-based solutions, it adopts GSW-based homomorphic matrix encryption to efficiently enable the linear layers of CNN, which is a naive method to secure matrix computation operations. (2) To improve the computation efficiency on GPU, a hybrid optimization approach based on CUDA (Compute Unified Device Architecture) has been proposed to improve the parallelism level and memory access speed when performing the matrix multiplication on GPU. Extensive experiments are conducted on industrial datasets and have shown the superior performance of the proposed cuSCNN framework in terms of runtime and power consumption compared to the other frameworks.Entities:
Keywords: GPU computation; cloud computing; convolutional neural network; deep learning; homomorphic encryption; privacy-preserving
Year: 2021 PMID: 35002664 PMCID: PMC8734535 DOI: 10.3389/fncom.2021.799977
Source DB: PubMed Journal: Front Comput Neurosci ISSN: 1662-5188 Impact factor: 2.380
Figure 1. The privacy question of the deep learning model deployed on an untrusted cloud.
Meaning of notation in the homomorphic encryption scheme.

| Notation | Meaning |
|---|---|
| ‖·‖∞ | The maximum norm of a vector or matrix |
| ‖·‖ | The Euclidean norm of a vector |
| ⟨·, ·⟩ | The inner product of two vectors |
| [A \| B] | The column concatenation of A and B |
| [A; B] | The row concatenation of A and B |
| A[i:j] | The submatrix consisting of rows i to j of A |
| Iₙ | The identity matrix with size of n |
| E₍ᵢ,ⱼ₎ | The matrix with 1 in the position (i, j) and 0 elsewhere |
| λ | Security parameter; the scheme can resist 2^λ attacks |
| q | Modulus |
| ⌈·⌋ | Rounding to the nearest integer |
| ⌈·⌉ | Rounding up |
| ⌊·⌋ | Rounding down |
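For readers implementing the scheme, the notation above maps directly onto standard NumPy operations. A minimal illustration (the variable names and example values are ours, not from the paper):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

max_norm = np.max(np.abs(A))            # ‖A‖∞, the maximum norm
euclid   = np.linalg.norm(A[0])         # ‖·‖, the Euclidean norm of a vector
inner    = np.dot(A[0], B[0])           # ⟨·,·⟩, the inner product
col_cat  = np.hstack([A, B])            # [A | B], column concatenation
row_cat  = np.vstack([A, B])            # [A; B], row concatenation
I2       = np.eye(2, dtype=int)         # identity matrix I_2
E = np.zeros((2, 2), dtype=int)
E[0, 1] = 1                             # E_(0,1): 1 at position (0, 1), 0 elsewhere
q = 2**16
reduced  = A % q                        # reduction modulo q
rounded  = np.rint(np.array([1.4, 2.6]))  # ⌈·⌋, rounding to the nearest integer
```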
Figure 2. CUDA kernel and memory hierarchies.
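The hybrid optimization exploits exactly this memory hierarchy: output tiles are computed from small sub-blocks that a thread block would stage in fast shared memory. The following block-tiled multiplication is our illustrative NumPy analogue of that access pattern, not the paper's actual CUDA kernel:

```python
import numpy as np

def tiled_matmul(A, B, tile=32):
    """Block-tiled matrix multiplication.

    Mirrors the structure of a shared-memory CUDA kernel: each (i, j)
    output tile is accumulated from A- and B-tiles small enough for a
    thread block to stage in on-chip shared memory.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=np.result_type(A, B))
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # In CUDA, these two slices would be loaded cooperatively
                # into shared memory before computing the partial product.
                a_tile = A[i:i + tile, p:p + tile]
                b_tile = B[p:p + tile, j:j + tile]
                C[i:i + tile, j:j + tile] += a_tile @ b_tile
    return C
```

NumPy slicing clips ragged edge tiles automatically, so matrix sizes need not be multiples of the tile size; a real kernel would guard those edges with bounds checks.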
Figure 3. Secure interactive computation protocol of cuSCNN.
Layer description of the CNN.

| Layer | Description |
|---|---|
| Layer-1 [Conv-1] | Input image: 28 × 28, kernel size: 5 × 5, stride: (1, 1), number of output channels: 5, padding = VALID, activation = ReLU. |
| Layer-2 [FC-1] | Fully connected with 5 × 13 × 13 = 845 inputs and 100 outputs, activation = ReLU. |
| Layer-3 [FC-2] | Fully connected with 100 inputs and 10 outputs, activation = softmax. |
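As a plaintext reference for the fully connected part of this architecture, the 845 → 100 → 10 pipeline can be sketched in NumPy. The weights below are random placeholders (in cuSCNN these products are evaluated homomorphically on ciphertexts):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

def softmax(x):
    e = np.exp(x - np.max(x))   # subtract the max for numerical stability
    return e / e.sum()

# Placeholder weights matching the table: 845 inputs -> 100 -> 10 outputs.
W1 = rng.standard_normal((100, 845)) * 0.01
b1 = np.zeros(100)
W2 = rng.standard_normal((10, 100)) * 0.01
b2 = np.zeros(10)

x = rng.standard_normal(845)    # flattened convolutional feature map
h = relu(W1 @ x + b1)           # FC-1 with ReLU
p = softmax(W2 @ h + b2)        # FC-2 with softmax: class probabilities
```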
Figure 4. Hybrid optimization approach on GPU.
Figure 5. Performance of matrix multiplication methods on GPU.
Comparison results of homomorphic matrix encryption schemes.

| Matrix size | Scheme |  |  |  |  |
|---|---|---|---|---|---|
| 32 × 32 | seIMC | 6.998 | 7.345 | 10.639 | 0.0768 |
|  | Jiang's | 0.09 | 0.01 | 15.592 | 0.0543 |
|  | Ours | 0.679 | 0.204 | – | 0.067 |
| 64 × 64 | seIMC | 7.82 | 8.21 | 12.287 | 0.312 |
|  | Jiang's | 0.196 | 0.01 | 37.793 | 0.705 |
|  | Ours | 0.8 | 0.233 | – | 0.222 |
| 128 × 128 | seIMC | 9.843 | 10.402 | 15.824 | 1.305 |
|  | Jiang's | – | – | – | – |
|  | Ours | 1.127 | 0.291 | – | 0.862 |
Bold values indicate our methods have a lower running time than the comparison methods.
Benchmarks of cuSCNN in Conv and FC layers.

| Layer | Input dimensions | Kernel dimensions |  |  | Total |  |
|---|---|---|---|---|---|---|
| Conv layer | (28 × 28 × 1, 846) | (5 × 5 × 1, 5) | 2696.9636 | 0.0074 | 2696.971 | 3.19 |
| FC layer | (846, 846) | (100, 846) | 820.523 | 0.077 | 820.6 | 0.97 |
|  | (101, 846) | (10, 846) | 760.109 | 0.091 | 760.2 | 0.9 |
Figure 6. Performance comparison of privacy-preserving neural network frameworks in runtime and power consumption.