
Multiclass_Metasurface_InverseDesign

Introduction

Welcome to the Raman Lab GitHub! This repo will walk you through the code used in the following publication: https://onlinelibrary.wiley.com/doi/10.1002/adom.202100548

Here, we use a conditional deep convolutional generative adversarial network (cDCGAN) to inverse design across multiple classes of metasurfaces.

Requirements

The following software is required to run the provided scripts. As of this writing, the versions listed below have been tested and verified. Training on a GPU is recommended due to the lengthy training times of GANs.

-Python 3.7
-PyTorch 1.9.0
-CUDA 10.2 (recommended for training on a GPU)
-OpenCV 3.4.2 (requires Python 3.7; Python 3.8 is not supported as of this writing)
-SciPy 1.6.2
-Matplotlib
-ffmpeg
-Pandas
-Spyder

Installation instructions for PyTorch (with CUDA) are available at: https://pytorch.org/. For convenience, here are installation commands for the Conda distribution (after installing Anaconda: https://www.anaconda.com/products/individual):

conda create -n myenv python=3.7
conda activate myenv
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
conda install -c anaconda opencv
conda install -c anaconda scipy
conda install matplotlib
conda install -c conda-forge ffmpeg
conda install pandas
conda install spyder
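
After installation, a quick sanity check can confirm that the packages import and that CUDA is visible to PyTorch (a minimal sketch; the expected version numbers are those listed above):

import torch
import cv2
import scipy
import pandas

print('PyTorch:', torch.__version__)                  # expect 1.9.0
print('CUDA available:', torch.cuda.is_available())   # True if the CUDA build is installed and a GPU is present
print('OpenCV:', cv2.__version__)                     # expect 3.4.2
print('SciPy:', scipy.__version__)                    # expect 1.6.2
print('Pandas:', pandas.__version__)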

Steps

0) Set up 'ffmpeg':

Go to the 'Utilities/SaveAnimation.py' file and update the following line to set up 'ffmpeg' (on Linux):

plt.rcParams['animation.ffmpeg_path'] = '/home/ramanlab/anaconda3/pkgs/ffmpeg-3.1.3-0/bin/ffmpeg'

Refer to this Stack Overflow post for more information (or for setup on Windows): https://stackoverflow.com/questions/23856990/cant-save-matplotlib-animation. Alternatively, comment out the 'save_video' line in 'DCGAN_Train.py'.
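
If 'ffmpeg' is already on your system PATH, a more portable alternative (an illustrative sketch, not part of the original script) is to locate the binary automatically instead of hard-coding its location:

import shutil
import matplotlib.pyplot as plt

ffmpeg_path = shutil.which('ffmpeg')  # returns None if ffmpeg is not on the PATH
if ffmpeg_path is not None:
    plt.rcParams['animation.ffmpeg_path'] = ffmpeg_path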

1) Train the cDCGAN (DCGAN_Train.py)

Download the files in the 'Training Data' and 'Results' folders and update the following lines in the 'DCGAN_Train.py' file:

#Location of Training Data
spectra_path = 'C:/.../absorptionData_HybridGAN.csv'

#Location to Save Models (Generators and Discriminators)
save_dir = 'C:/.../'

#Root directory for dataset (images must be in a subdirectory within this folder)
img_path = 'C:/.../Images'

Running this file will train the cDCGAN and save the models (generators and discriminators) at the specified location every 50 epochs. Because model performance depends on the number of trained epochs, multiple generators are saved in a single training session. In our tests with the provided training data, the optimal generator was obtained at about 500 epochs (this may differ for other datasets). Depending on the available hardware, training can take up to a few hours. After training, the following files will also be produced:
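
For reference, the periodic saving described above follows the standard PyTorch checkpointing pattern. Below is a minimal sketch, assuming generator/discriminator objects named 'netG'/'netD' and the 'save_dir' variable from above; it is not the exact code in 'DCGAN_Train.py':

import os
import torch

def save_checkpoints(netG, netD, save_dir, epoch, interval=50):
    # Save generator and discriminator weights every `interval` epochs (illustrative sketch).
    if (epoch + 1) % interval == 0:
        torch.save(netG.state_dict(), os.path.join(save_dir, 'netG_epoch_%d.pt' % (epoch + 1)))
        torch.save(netD.state_dict(), os.path.join(save_dir, 'netD_epoch_%d.pt' % (epoch + 1)))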

1.1) Log file showing losses and total training time (training_log.txt):

Start Time = Thu Jul  1 11:02:47 2021
[0/500][0/1174]	Loss_D: 2.0491	Loss_G: 19.2079	D(x): 0.6574	D(G(z)): 0.6823 / 0.0000
[0/500][50/1174]	Loss_D: 4.1192	Loss_G: 6.7932	D(x): 0.6742	D(G(z)): 0.9405 / 0.0028
...

1.2) Video showing generator outputs per epoch (animation.mp4):

1.3) Plots of Generator and Discriminator losses (losses.png):

For more detailed interpretation of the losses, please refer to: https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html
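
The losses.png plot follows the same convention as the linked tutorial. Here is a minimal sketch of producing such a plot from per-iteration loss lists (the names 'G_losses' and 'D_losses' are assumptions, not necessarily the script's variables):

import matplotlib.pyplot as plt

def plot_losses(G_losses, D_losses, out_path='losses.png'):
    # Plot generator and discriminator losses per training iteration and save the figure.
    plt.figure(figsize=(10, 5))
    plt.title('Generator and Discriminator Loss During Training')
    plt.plot(G_losses, label='G')
    plt.plot(D_losses, label='D')
    plt.xlabel('Iterations')
    plt.ylabel('Loss')
    plt.legend()
    plt.savefig(out_path)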

2) Load cDCGAN & Predict by Inputting Target Spectrum (DCGAN_Predict.py)

Update the following lines in the 'DCGAN_Predict.py' file:

#Location of Saved Generator
netGDir='C:/.../*.netG__.pt'

#Location of Training Data
spectra_path = 'C:/.../absorptionData_HybridGAN.csv'

Running this file will pass several target spectra into the trained generator, producing multiple colored images. The colored images are converted to black and white, and then to binary files for import into Lumerical FDTD (a commercial EM solver). Material properties are saved in the 'properties.txt' file.
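
As an illustration of the conversion described above, here is a minimal OpenCV sketch (the filenames and threshold value are assumptions for illustration, not the exact parameters used in 'DCGAN_Predict.py'):

import cv2

# Illustrative pipeline: colored generator output -> grayscale -> binary image for FDTD import.
img = cv2.imread('generated_design.png')                       # example filename (assumption)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                   # color to grayscale
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)   # grayscale to binary mask
cv2.imwrite('generated_design_binary.png', binary)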

3) Generate Simulation Model - Lumerical FDTD (DCGAN_FDTD.lsf)

To validate the designs generated by the cDCGAN, this repo is integrated with Lumerical FDTD. From Lumerical's script editor, run the 'DCGAN_FDTD.lsf' file and ensure that the binary and 'Master.fsp' files are in the same folder (default: '.../Results'). If done correctly, Lumerical models will be generated that reflect the GAN outputs.

4) Notes

4.1) How to Address Potential Errors

  1. If you get the following error:
BrokenPipeError: [Errno 32] Broken pipe

you are probably running on Windows and need to set 'workers = 0'. More details are described in the script comments.
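
The 'workers' setting corresponds to the 'num_workers' argument of the PyTorch DataLoader. A minimal sketch of the relevant call, using a placeholder dataset:

import torch
from torch.utils.data import DataLoader, TensorDataset

workers = 0  # set to 0 on Windows to avoid BrokenPipeError from multiprocessing workers
dataset = TensorDataset(torch.randn(8, 3, 64, 64))  # placeholder dataset for illustration
dataloader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=workers)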

4.2) How to Generalize the Code

As stated in the publication, we believe our approach can be applied to other material design problems. However, several changes must be made, which may not be obvious at first glance if you are not familiar with Python/PyTorch. Here are several recommendations for adapting the code to different design problems:

• Use a column/row definition of the training data, where the columns are the design parameters and the rows are the design instances.

• If grayscale images are preferred, a grayscale transformation is needed when defining the dataset (see the sketch after this list).

• Related to the above point, changes in image dimensions or channels should be accompanied by corresponding changes to the 'nc' (number of channels) field.

• Most of the 'DCGAN_Predict.py' script is not needed (lines 93 and beyond) if you only want to generate images using the DCGAN. The rest of that code provides custom Lumerical support, but pay close attention to lines 70-91 for loading the generator and passing inputs into it.

• Changes to Generator/Discriminator hyperparameters (in 'DCGAN_Train.py') must be accompanied by the same changes in 'DCGAN_Predict.py', since PyTorch requires the model to be redefined with the same architecture before loading its saved weights.
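
As an illustration of the image-related points above, here is a minimal sketch of defining an image dataset with a grayscale transform and a matching 'nc' value (the root path and image size are placeholders; this is not the exact code from 'DCGAN_Train.py'):

import torchvision.datasets as dset
import torchvision.transforms as transforms

image_size = 64   # placeholder image size
nc = 1            # number of channels must match the transform (1 for grayscale, 3 for RGB)

dataset = dset.ImageFolder(
    root='C:/.../Images',  # images must be in a subdirectory within this folder
    transform=transforms.Compose([
        transforms.Grayscale(num_output_channels=1),  # grayscale transformation
        transforms.Resize(image_size),
        transforms.CenterCrop(image_size),
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,)),         # single-channel mean/std
    ]))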

Citation

If you find this repo helpful, or use any of the code you find here, please cite our work using the following:

C. Yeung, et al. Global Inverse Design across Multiple Photonic Structure Classes Using Generative Deep Learning. Advanced Optical Materials, 2021. 
