
Feature: Automated Creation Based on Example for PyTorch Linear Modules with ReLU Activations #126

Open · wants to merge 14 commits into base: master

Conversation

git-thor commented Jan 14, 2022

Work in progress

The PR addresses #124

Automated generation from a PyTorch module class (`torch.nn.Module` child), leveraging the `torchinfo` architecture summary interface, comparable to the TensorFlow/Keras `summary` method.

[Image: generated diagram of the example MLP architecture]

This is created from the following code:

# Define example module
import torch as th

class MLP(th.nn.Module):

    def __init__(self):
        super(MLP, self).__init__()
        self.net = th.nn.Sequential(
            th.nn.Linear(2, 16),
            th.nn.ReLU(),
            th.nn.Linear(16, 16),
            th.nn.ReLU(),
            th.nn.Linear(16, 1)
        )

    def forward(self, x):
        x = self.net(x)
        return x

# Parse the example module
from pycore.torchparse import TorchArchParser
from pycore.tikzeng import to_generate

mlp = MLP()
parser = TorchArchParser(torch_module=mlp, input_size=(1,2))
arch = parser.get_arch()
to_generate(arch, pathname="./test_torch_mlp.tex")
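
For reference, a minimal sketch of the `torchinfo` summary interface the parser builds on, reusing the `mlp` instance from above (this assumes `torchinfo` is installed; the attribute names match those used in the review snippet further down):

# Sketch: the per-layer metadata torchinfo exposes for a module
from torchinfo import summary

stats = summary(mlp, input_size=(1, 2), verbose=0)
for layer in stats.summary_list:
    # Each LayerInfo carries e.g. class_name ("Linear", "ReLU", ...) and
    # output_size -- the kind of metadata a parser like TorchArchParser
    # can map to tikz blocks.
    print(layer.class_name, layer.output_size)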
  • Initial PyTorch support for Linear and ReLU layers in a Sequential module structure
  • Added an interface for custom fill colors; fully connected layers currently reuse the Conv block as a hack until a specialized function is provided (see the sketch below)
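
A minimal sketch of what the Conv hack amounts to (`to_Conv` is tikzeng's existing block helper; the sizing values and caption below are illustrative assumptions):

# Sketch: draw a fully connected layer by reusing the existing Conv block
# (illustrative values; a dedicated fully connected block is still a TODO)
from pycore.tikzeng import to_Conv

fc1 = to_Conv("fc1", s_filer=16, n_filer=1, offset="(0,0,0)", to="(0,0,0)",
              width=1, height=16, depth=1, caption="Linear 2-16")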

TODOs for subsequent PRs

  • Variable generation with respect to PyTorch layers and activation functions
  • Keras support

Addressed #124 with respect to PyTorch

git-thor (Author) commented Jan 14, 2022

Please feel free to give feedback and contribute to this feature, especially regarding the subsequent Keras support.

MyGodItsFull0fStars left a comment:

Great job on this pull request! Clear structure and use of examples.

pycore/torchparse.py:

arch.append(pnn.to_begin())
for idx, layer in enumerate(summary_list[2:], start=1):
    if layer.class_name == "Linear":


If more module types are parsed in the future, helper functions or a builder class keyed on layer.class_name would avoid cluttering the parse() function with further if conditions.
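
A minimal sketch of that suggestion, paralleling the snippet quoted above (build_linear and build_relu are hypothetical helpers, not part of this PR):

# Sketch: dispatch on layer.class_name instead of chaining if conditions
def build_linear(idx, layer):
    ...  # construct and return the tikz block for a Linear layer

def build_relu(idx, layer):
    ...  # construct and return the tikz block for a ReLU activation

LAYER_BUILDERS = {"Linear": build_linear, "ReLU": build_relu}

for idx, layer in enumerate(summary_list[2:], start=1):
    builder = LAYER_BUILDERS.get(layer.class_name)
    if builder is not None:
        arch.append(builder(idx, layer))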

git-thor (Author) replied:

I agree, and this becomes even more important as the number of supported PyTorch module types (layers, activations) grows. Note, though, that I oriented myself on the existing coding style in blocks.py and tikzeng.py.

Further, I propose abstracting the parsing and tikz code construction so that collections of layers map to a shared tikz representation, and a general activation representation covers all activation functions implemented in PyTorch.

That said, all of this could, and in my opinion should, be an improvement and extension on top of this basic functionality.
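
A minimal sketch of that grouping idea (the module families below are illustrative assumptions, not a complete list):

# Sketch: map families of PyTorch modules onto shared tikz representations
DENSE_LIKE = {"Linear", "Bilinear", "LazyLinear"}
ACTIVATIONS = {"ReLU", "LeakyReLU", "Sigmoid", "Tanh", "GELU"}

def tikz_category(class_name: str) -> str:
    if class_name in DENSE_LIKE:
        return "dense"        # one shared dense-layer representation
    if class_name in ACTIVATIONS:
        return "activation"   # one general activation representation
    return "unsupported"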

git-thor (Author) left a comment:

Addressed comments from @MyGodItsFull0fStars

git-thor changed the title from "[WIP] Feature: Automated Creation Based On PyTorch/Keras Feed Forward Modules" to "Feature: Automated Creation Based On PyTorch/Keras Feed Forward Modules" on Jul 9, 2023

git-thor commented Jul 9, 2023

This is ready as initial functionality for automated PyTorch generation support. Please review and merge if deemed OK.

git-thor changed the title from "Feature: Automated Creation Based On PyTorch/Keras Feed Forward Modules" to "Feature: Automated Creation Based On PyTorch Feed Forward Modules" on Jul 9, 2023
git-thor changed the title from "Feature: Automated Creation Based On PyTorch Feed Forward Modules" to "Feature: Automated Creation Based on Example for PyTorch Linear Modules with ReLU Activations" on Jul 9, 2023
space192 commented:

Hey guys, any plans to support conv layers?


git-thor commented Jan 26, 2024

Hey @space192, I am currently occupied and unable to push this PR further, but we would happily accept your extension to CNNs. You can create a PR against that branch of my fork so it shows up here.
