
Why is my encrypted Linear Classifier slower than my encrypted ConvNet classifier? #477

Open
mayank64ce opened this issue Apr 19, 2024 · 0 comments
Labels
Type: Question ❔ Question about implementation or some technical aspect

Comments

@mayank64ce

Question

Why is my encrypted Linear Classifier slower than my encrypted ConvNet classifier?

Further Information

I am trying to evaluate an encrypted linear classifier and an encrypted CNN classifier, both of which I implemented with the TenSEAL Python library.

class EncLinearNet:
    def __init__(self, state_dict):
        # Store the (transposed) fc weights and bias as plain Python lists
        self.fc_weight = state_dict['fc.weight'].T.data.tolist()
        self.fc_bias = state_dict['fc.bias'].data.tolist()

    def forward(self, enc_x):
        # Single encrypted matrix product plus plaintext bias
        enc_x = enc_x.mm(self.fc_weight) + self.fc_bias
        return enc_x

    def __call__(self, *args, **kwargs):
        return self.forward(*args, **kwargs)

and

class EncConvNet:
    def __init__(self, torch_nn):
        self.conv1_weight = torch_nn.conv1.weight.data.view(
            torch_nn.conv1.out_channels, torch_nn.conv1.kernel_size[0],
            torch_nn.conv1.kernel_size[1]
        ).tolist()
        self.conv1_bias = torch_nn.conv1.bias.data.tolist()
        
        self.fc1_weight = torch_nn.fc1.weight.T.data.tolist()
        self.fc1_bias = torch_nn.fc1.bias.data.tolist()
        
        self.fc2_weight = torch_nn.fc2.weight.T.data.tolist()
        self.fc2_bias = torch_nn.fc2.bias.data.tolist()
        
        
    def forward(self, enc_x, windows_nb):
        # conv layer
        enc_channels = []
        for kernel, bias in zip(self.conv1_weight, self.conv1_bias):
            y = enc_x.conv2d_im2col(kernel, windows_nb) + bias
            enc_channels.append(y)
        # pack all channels into a single flattened vector
        enc_x = ts.CKKSVector.pack_vectors(enc_channels)
        # square activation
        enc_x.square_()
        # fc1 layer
        enc_x = enc_x.mm(self.fc1_weight) + self.fc1_bias
        # square activation
        enc_x.square_()
        # fc2 layer
        enc_x = enc_x.mm(self.fc2_weight) + self.fc2_bias
        return enc_x
    
    def __call__(self, *args, **kwargs):
        return self.forward(*args, **kwargs)

Here are the encryption parameters:

## Encryption Parameters

# controls precision of the fractional part
bits_scale = 26

# Create TenSEAL context
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[31, bits_scale, bits_scale, bits_scale, bits_scale, bits_scale, bits_scale, 31]
)

# set the scale
context.global_scale = pow(2, bits_scale)

# galois keys are required to do ciphertext rotations
context.generate_galois_keys()

The input to both of them is an encrypted vector of type tenseal.tensors.ckksvector.CKKSVector.

The input of EncLinearNet (enc_x) is an encrypted vector of size 784, while the input of EncConvNet is an encrypted vector of size 4096.

Just before the first linear layer computation in EncConvNet, enc_x is of size 256 and the expected output size is 64.

The problem is that the forward pass of EncConvNet takes about 0.5 s, while that of EncLinearNet takes about 2.5 s.

I don't understand why.

All the code is adapted from: https://github.com/OpenMined/TenSEAL/blob/13486592953f82ca60502fd196016f815891e25a/tutorials/Tutorial%204%20-%20Encrypted%20Convolution%20on%20MNIST.ipynb

System Information

  • OS: Ubuntu 23.10
  • Language Version: Python 3.10
  • Package Manager Version: conda 24.3.0