waleedka / hiddenlayer
1.8K stars · 44 watchers · 264 forks · 4.47 MB

Neural network graphs and training metrics for PyTorch, TensorFlow, and Keras.

License: MIT License

Python 100.00%
pytorch tensorflow deeplearning visualization keras tensorboard

hiddenlayer's People

Contributors

mocuto, philferriere, ss18, waleedka


hiddenlayer's Issues

Drawing error

Why is the picture drawn in a mess when I use your code? I don't know where it went wrong, because I just copied your code from GitHub.
[screenshot: garbled graph]

Save network plot to file

Hello,

I saw there is an option to save training stat plots as PNG files, but I couldn't find a way to save the actual NN architecture plot to a file. Is this currently possible?

Thanks
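For reference, the Graph object returned by build_graph wraps graphviz and can render itself to disk. A minimal sketch, assuming the Graph.save() helper that recent hiddenlayer versions expose:

import torch
import torchvision.models
import hiddenlayer as hl

model = torchvision.models.alexnet()
graph = hl.build_graph(model, torch.zeros([1, 3, 224, 224]))
# Render the architecture plot to a file; format can be "png", "pdf", etc.
graph.save("alexnet_graph", format="png")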

UnboundLocalError: local variable 'FRAMEWORK_TRANSFORMS' referenced before assignment

When building a graph with hl.build_graph(model), where model is a TensorFlow net, the following occurs:

hiddenlayer-master/hiddenlayer/graph.py in build_graph(model, args, input_names, transforms, framework_transforms)
147 if framework_transforms:
148 if framework_transforms == "default":
--> 149 framework_transforms = FRAMEWORK_TRANSFORMS
150 for t in framework_transforms:
151 g = t.apply(g)

UnboundLocalError: local variable 'FRAMEWORK_TRANSFORMS' referenced before assignment

Is it possible to control which part of the model graph is printed

The graph is useful for visualizing a model, but when the model is big, printing the entire model is not too helpful. Say I already have the structure of the model (an OrderedDict or groups of layers); is it possible to print only part of the model graph?

I am using PyTorch with a pre-trained ResNet, and I am only interested in the layers after the ResNet encoder. Ideally, I would like to print something like ResNet (maybe just as a manual input argument) plus the details of the layer groups I am interested in.
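For what it's worth, hiddenlayer's graph transforms can approximate this by collapsing the parts you don't care about into single nodes. A sketch, assuming the Fold and FoldDuplicates transforms from the repo (the exact pattern string is a guess and depends on the ops in your graph):

import torch
import torchvision.models
import hiddenlayer as hl

model = torchvision.models.resnet50()
transforms = [
    # Collapse conv/bn/relu triples into one node (pattern string is a guess)
    hl.transforms.Fold("Conv > BatchNormalization > Relu", "ConvBnRelu"),
    # Merge repeated blocks, shrinking the encoder to "ConvBnRelu xN"
    hl.transforms.FoldDuplicates(),
]
graph = hl.build_graph(model, torch.zeros([1, 3, 224, 224]),
                       transforms=transforms)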

ValueError on vgg16_bn unless I first train for one epoch

I'm able to build and display a graph right away on the torchvision alexnet model. But on the torchvision vgg16_bn model, if I try to graph the model before training, I get the error ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 1024]), even though I passed in hl.build_graph(model, torch.zeros((1, 3, 224, 224)).cuda()). If I train the model for one epoch, then I'm able to build and display the graph properly.

Edit: Actually the torchvision model copied exactly works, but if I create a fastai learner with this model (which removes the head and adds a custom head with adaptive pooling) then I get this issue.

Edit 2: if I set the model to eval mode before trying to graph it works

Any idea why this is the case?
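The error itself comes from batch norm: in training mode it computes per-channel batch statistics, and the dummy batch of one leaves a single value per channel once the features are flattened to [1, 1024]. In eval mode batch norm uses its running statistics instead, which is why the workaround in Edit 2 helps:

model.eval()  # batch norm now uses running stats, so a batch of 1 is fine
graph = hl.build_graph(model, torch.zeros((1, 3, 224, 224)).cuda())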

Arguments problem

If my forward function has a dict argument, what can I do?
For example:

import torch
import torch.nn as nn
import hiddenlayer as hl
import torch.jit as jit

class TestNet(nn.Module):

    def __init__(self):
        super(TestNet, self).__init__()
        self.conv = nn.Conv2d(3, 3, 1, 1)
        self.relu = nn.ReLU(True)

    def forward(self, x , y, loss):
        if loss['flag']:
            x = self.conv(x)
        else:
            assert False
        return x

x = torch.randn(1, 3, 224, 224)
net = TestNet()
graph = hl.build_graph(net, (x, x, {'flag': True}))

It fails with the error below:

Traceback (most recent call last):
  File "/home/gaozhihua/program/mmdetection/ignore_dir/1.py", line 27, in <module>
    graph = hl.build_graph(net, (x, x, {'flag': True}))
  File "/home/gaozhihua/program/hiddenlayer/hiddenlayer/graph.py", line 143, in build_graph
    import_graph(g, model, args)
  File "/home/gaozhihua/program/hiddenlayer/hiddenlayer/pytorch_builder.py", line 70, in import_graph
    trace, out = torch.jit.get_trace_graph(model, args)
  File "/home/gaozhihua/anaconda2/envs/open-mmlab/lib/python3.6/site-packages/torch/jit/__init__.py", line 196, in get_trace_graph
    return LegacyTracedModule(f, _force_outplace)(*args, **kwargs)
  File "/home/gaozhihua/anaconda2/envs/open-mmlab/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/gaozhihua/anaconda2/envs/open-mmlab/lib/python3.6/site-packages/torch/jit/__init__.py", line 242, in forward
    in_vars, in_desc = _flatten(args)
RuntimeError: Only tuples, lists and Variables supported as JIT inputs, but got dict

What should I do?
@waleedka @FerumFlex @ss18
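One possible workaround (a sketch, not from this thread): the tracer only flattens tensors, tuples, and lists, so a thin wrapper module can expose a tensor-only signature and rebuild the dict inside. The DictWrapper name and the tensor-to-bool flag are assumptions for illustration:

class DictWrapper(nn.Module):
    def __init__(self, net):
        super(DictWrapper, self).__init__()
        self.net = net

    def forward(self, x, y, flag):
        # Rebuild the dict argument from a plain tensor flag
        return self.net(x, y, {'flag': bool(flag)})

graph = hl.build_graph(DictWrapper(net), (x, x, torch.tensor(1)))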

NotImplementedError

I get a NotImplementedError when trying to visualize the graph of a 3D CNN network in PyTorch.

Shape visualization problem

Hi,
Thank you for your amazing work! But I have a question about the visualization: how can I get more shape info? Can you tell me the solution?
Best,
yuxiang

[screenshot: vgg11 graph]

unable to handle batchnorm1d in pytorch

A simple example works with no issues:
[screenshot]
When I add BatchNorm1d between layers, I get:

Expected more than 1 value per channel when training, got input size torch.Size([1, 100])

It seems like it's treating BatchNorm1d as BatchNorm2d?
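This is the same root cause as the vgg16_bn issue above: BatchNorm1d in training mode cannot compute statistics from a single sample. Putting the model in eval mode, or tracing with a dummy batch of at least two, avoids it; a self-contained sketch:

import torch
from torch import nn
import hiddenlayer as hl

model = nn.Sequential(nn.Linear(32, 100), nn.BatchNorm1d(100), nn.ReLU())
model.eval()  # or keep training mode and use a batch of >= 2, as below
graph = hl.build_graph(model, torch.zeros([2, 32]))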

ValueError: model input param must be a PyTorch, TensorFlow, or Keras-with-TensorFlow-backend model

I have a problem: I want to visualize this code, but it raises the error
ValueError: model input param must be a PyTorch, TensorFlow, or Keras-with-TensorFlow-backend model. I don't know how to set the input. (I am using the PyTorch tool.)
I did:

import hiddenlayer as hl
input = torch.zeros([1, 3, 64, 64, 64])
model = VoxResNet()
hl.build_graph(model, (input))

I am following this code from GitHub: https://github.com/Ryo-Ito/brain_segmentation. Please help me visualize it. Thank you.

import chainer
import chainer.functions as F
import chainer.links as L
import torch

class VoxResModule(chainer.Chain):
    """
    Voxel Residual Module
    input
    BatchNormalization, ReLU
    Conv 64, 3x3x3
    BatchNormalization, ReLU
    Conv 64, 3x3x3
    output
    """

    def __init__(self):
        initW = chainer.initializers.HeNormal(scale=0.01)
        super().__init__()

        with self.init_scope():
            self.bnorm1 = L.BatchNormalization(size=64)
            self.conv1 = L.ConvolutionND(3, 64, 64, 3, pad=1, initialW=initW)
            self.bnorm2 = L.BatchNormalization(size=64)
            self.conv2 = L.ConvolutionND(3, 64, 64, 3, pad=1, initialW=initW)

    def __call__(self, x):
        h = F.relu(self.bnorm1(x))
        h = self.conv1(h)
        h = F.relu(self.bnorm2(h))
        h = self.conv2(h)
        return h + x


class VoxResNet(chainer.Chain):
    """Voxel Residual Network"""

    def __init__(self, in_channels=1, n_classes=4):
        init = chainer.initializers.HeNormal(scale=0.01)
        super().__init__()

        with self.init_scope():
            self.conv1a = L.ConvolutionND(
                3, in_channels, 32, 3, pad=1, initialW=init)
            self.bnorm1a = L.BatchNormalization(32)
            self.conv1b = L.ConvolutionND(
                3, 32, 32, 3, pad=1, initialW=init)
            self.bnorm1b = L.BatchNormalization(32)
            self.conv1c = L.ConvolutionND(
                3, 32, 64, 3, stride=2, pad=1, initialW=init)
            self.voxres2 = VoxResModule()
            self.voxres3 = VoxResModule()
            self.bnorm3 = L.BatchNormalization(64)
            self.conv4 = L.ConvolutionND(
                3, 64, 64, 3, stride=2, pad=1, initialW=init)
            self.voxres5 = VoxResModule()
            self.voxres6 = VoxResModule()
            self.bnorm6 = L.BatchNormalization(64)
            self.conv7 = L.ConvolutionND(
                3, 64, 64, 3, stride=2, pad=1, initialW=init)
            self.voxres8 = VoxResModule()
            self.voxres9 = VoxResModule()
            self.c1deconv = L.DeconvolutionND(
                3, 32, 32, 3, pad=1, initialW=init)
            self.c1conv = L.ConvolutionND(
                3, 32, n_classes, 3, pad=1, initialW=init)
            self.c2deconv = L.DeconvolutionND(
                3, 64, 64, 4, stride=2, pad=1, initialW=init)
            self.c2conv = L.ConvolutionND(
                3, 64, n_classes, 3, pad=1, initialW=init)
            self.c3deconv = L.DeconvolutionND(
                3, 64, 64, 6, stride=4, pad=1, initialW=init)
            self.c3conv = L.ConvolutionND(
                3, 64, n_classes, 3, pad=1, initialW=init)
            self.c4deconv = L.DeconvolutionND(
                3, 64, 64, 10, stride=8, pad=1, initialW=init)
            self.c4conv = L.ConvolutionND(
                3, 64, n_classes, 3, pad=1, initialW=init)

    def __call__(self, x, train=False):
        """
        calculate output of VoxResNet given input x

        Parameters
        ----------
        x : (batch_size, in_channels, xlen, ylen, zlen) ndarray
            image to perform semantic segmentation

        Returns
        -------
        proba: (batch_size, n_classes, xlen, ylen, zlen) ndarray
            probability of each voxel belonging to each class
            if train=True, returns list of logits instead
        """
        print(x.shape, '-------begin------------')
        with chainer.using_config("train", train):
            h = self.conv1a(x)
            h = F.relu(self.bnorm1a(h))
            h = self.conv1b(h)
            c1 = F.clipped_relu(self.c1deconv(h))
            c1 = self.c1conv(c1)

            h = F.relu(self.bnorm1b(h))
            h = self.conv1c(h)
            h = self.voxres2(h)
            h = self.voxres3(h)
            c2 = F.clipped_relu(self.c2deconv(h))
            c2 = self.c2conv(c2)

            h = F.relu(self.bnorm3(h))
            h = self.conv4(h)
            h = self.voxres5(h)
            h = self.voxres6(h)
            c3 = F.clipped_relu(self.c3deconv(h))
            c3 = self.c3conv(c3)

            h = F.relu(self.bnorm6(h))
            h = self.conv7(h)
            h = self.voxres8(h)
            h = self.voxres9(h)
            c4 = F.clipped_relu(self.c4deconv(h))
            c4 = self.c4conv(c4)

            c = c1 + c2 + c3 + c4

        if train:
            return [c1, c2, c3, c4, c]
        else:
            return F.softmax(c)


import hiddenlayer as hl

input = torch.zeros([1, 3, 64, 64, 64])
model = VoxResNet()
hl.build_graph(model, (input))
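A note on the error itself: this VoxResNet subclasses chainer.Chain, and hiddenlayer only recognizes PyTorch, TensorFlow, and Keras models, so the ValueError is expected regardless of the input shape. A quick check:

import torch
import chainer

model = VoxResNet()
print(isinstance(model, torch.nn.Module))  # False: this is a chainer.Chain
print(isinstance(model, chainer.Chain))    # True, and Chainer is not supported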

Assertion error running example pytorch notebooks

Hi,

I get the following error when I try to run your example pytorch notebooks:

~/code/development_tools/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/onnx/symbolic.py in adaptive_avg_pool2d(g, input, output_size)
    598 @parse_args('v', 'is')
    599 def adaptive_avg_pool2d(g, input, output_size):
--> 600     assert output_size == [1, 1], "Only output_size=[1, 1] is supported"
    601     return g.op("GlobalAveragePool", input)
    602 

AssertionError: Only output_size=[1, 1] is supported

I haven't modified anything, and I wonder if you have any idea what the source of this error is?

Regards,

Alex

hiddenlayer cannot identify that my module is indeed a torch.nn.Module

I have a personalized model class called, let's say, MyNet(Net), which inherits from Net(nn.Module).

When I call hl.build_graph(model, ...), hiddenlayer then raises the exception:

  • ValueError: model input param must be a PyTorch, TensorFlow, or Keras-with-TensorFlow-backend model.

When I put everything inside only one class it works...
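hiddenlayer decides on a framework by inspecting the model's class hierarchy, so one diagnostic is to print the MRO and confirm that torch.nn.Module really is among the base classes (a sketch; the exact detection logic inside hiddenlayer may differ):

import inspect

for cls in inspect.getmro(type(model)):
    print(cls.__module__, cls.__name__)
# torch.nn.modules.module.Module should appear for a genuine nn.Module subclass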

Possible to use with pytorch model generated by fastai?

Is it possible to use this on the models generated by fastai? If I run it on the pytorch model stored by the learner object, I get an error that it isn't a valid model.

ValueError: `model` input param must be a PyTorch, TensorFlow, or Keras-with-TensorFlow-backend model.

Problem with output size using FoldDuplicates.

Hello @waleedka !

I have just noticed that when I use FoldDuplicates, I get the wrong output size (the graph shows the output size after the first module, instead of the output size of the last module of the folded duplicates).

Error while building graph for 3D ResNet

I am getting this error while building a graph for a 3D ResNet:
hl.build_graph(model,torch.zeros([1,3,16,112,12]))

RuntimeError: invalid argument 2: input image (T: 1 H: 4 W: 1) smaller than kernel size (kT: 1 kH: 4 kW: 4) at /opt/conda/conda-bld/pytorch_1549635019666/work/aten/src/THCUNN/generic/VolumetricAveragePooling.cu:57
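The message says a 1x4x1 feature map reached a 1x4x4 pooling kernel, i.e. the input is spatially too small by the time it hits the final pooling layer. The last dimension of the dummy input looks like a typo, 12 where 112 was probably intended:

hl.build_graph(model, torch.zeros([1, 3, 16, 112, 112]))  # W=112, not 12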

When I want to visualize the efficientNet, a runtime error occurs.

import json
from PIL import Image
import torch
from torchvision import transforms
from torchviz import make_dot
import hiddenlayer as hl

from efficientnet_pytorch import EfficientNet
model = EfficientNet.from_pretrained('efficientnet-b3')
model.eval()
hl_graph = hl.build_graph(model, torch.zeros([1, 3, 224, 224]))
hl_graph

Traceback (most recent call last):
File "example.py", line 30, in
hl.build_graph(model, torch.zeros([1, 3, 224, 224]))
File "/home/tianhui/hiddenlayer/hiddenlayer/graph.py", line 143, in build_graph
import_graph(g, model, args)
File "/home/tianhui/hiddenlayer/hiddenlayer/pytorch_builder.py", line 71, in import_graph
torch.onnx._optimize_trace(trace, torch.onnx.OperatorExportTypes.ONNX)
File "/home/lanqiang/anaconda3/lib/python3.6/site-packages/torch/onnx/init.py", line 40, in _optimize_trace
trace.set_graph(utils._optimize_graph(trace.graph(), operator_export_type))
File "/home/lanqiang/anaconda3/lib/python3.6/site-packages/torch/onnx/utils.py", line 188, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/lanqiang/anaconda3/lib/python3.6/site-packages/torch/onnx/init.py", line 50, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/lanqiang/anaconda3/lib/python3.6/site-packages/torch/onnx/utils.py", line 589, in _run_symbolic_function
return fn(g, *inputs, **attrs)
File "/home/lanqiang/anaconda3/lib/python3.6/site-packages/torch/onnx/symbolic.py", line 130, in wrapper
args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
File "/home/lanqiang/anaconda3/lib/python3.6/site-packages/torch/onnx/symbolic.py", line 130, in
args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
File "/home/lanqiang/anaconda3/lib/python3.6/site-packages/torch/onnx/symbolic.py", line 90, in _parse_arg
raise RuntimeError("Failed to export an ONNX attribute, "
RuntimeError: Failed to export an ONNX attribute, since it's not constant, please try to make things (e.g., kernel size) static if possible

TypeError: zeros(): argument 'out' (position 2) must be Tensor, not list

I want to visualize a model with 3 inputs and have a problem feeding the 3 inputs. The model has three inputs, as in this figure:
[screenshot: three-input model]

I followed Waleed's idea (https://github.com/waleedka) and visualized 3 patches with 3 inputs [32, 6, 25, 25], [32, 6, 51, 51], [32, 6, 75, 75] with the line:

hl.build_graph(model, torch.zeros([32,6, 25, 25], [32,6, 51, 51],[32, 6, 75, 75]))

  1. But my code raised the error: TypeError: zeros(): argument 'out' (position 2) must be Tensor, not list.
    How do I fix this problem? (I also tried many ways, for example:
    #hl.build_graph(net, torch.zeros(32, 6, 25, 25), torch.zeros(32, 6, 51, 51), torch.zeros(32, 6, 75, 75))
    #hl.build_graph(net, torch.zeros((32, 6, 25, 25), (32, 6, 51, 51), (32, 6, 75, 75)))
    #hl.build_graph(net, torch.zeros([(32, 6, 25, 25)], [(32, 6, 51, 51)], [(32, 6, 75, 75)]))
    A sketch of a possible fix follows at the end of this issue.)

In addition, I succeeded with the first patch with the line
hl.build_graph(model, torch.zeros([32, 6, 25, 25])), which gave this figure:
[screenshot]
  2. A further question about the window size of the model visualization: can we show the full model in one window, or save it? As the pictures below show, I have to scroll many times to see my whole model.

[screenshots: scrolled graph views]

Thank you.
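On question 1: torch.zeros() builds a single tensor, so the second shape list is interpreted as its out= argument, which is exactly the TypeError reported. Each input needs its own tensor, and build_graph accepts them as a tuple. A sketch, assuming the model's forward takes three tensors:

inputs = (
    torch.zeros(32, 6, 25, 25),
    torch.zeros(32, 6, 51, 51),
    torch.zeros(32, 6, 75, 75),
)
graph = hl.build_graph(model, inputs)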

fail to run pytest

After building hiddenlayer, pytest -v failed at:

tests/test_pytorch_graph.py::TestPytorchGraph::test_graph [1]    29548 segmentation fault (core dumped)  pytest -v

Any idea? Or should I ignore it anyway?

import hiddenlayer Segmentation fault (core dumped)

I tried installing hiddenlayer with pip and from source, respectively. But when I import hiddenlayer, I get an error: Segmentation fault (core dumped).

Do you have any idea about this bug?

What is the number input for the visualizer model?

I run this code in a Jupyter Notebook:

import torch
import torchvision.models
import hiddenlayer as hl

model = torchvision.models.vgg16()
hl.build_graph(model, torch.zeros([1, 3, 224, 224]))

And now I want to feed three inputs: [32, 6, 25, 25], [32, 6, 51, 51], [32, 6, 75, 75]. I wrote:

hl.build_graph(net, torch.zeros(
    [32, 6, 25, 25],
    [32, 6, 51, 51],
    [32, 6, 75, 75]
))

But my code raised the error: TypeError: zeros(): argument 'out' (position 2) must be Tensor, not list.
How do I fix this problem? (I also tried many ways, for example:
#hl.build_graph(net, torch.zeros(32, 6, 25, 25), torch.zeros(32, 6, 51, 51), torch.zeros(32, 6, 75, 75))
#hl.build_graph(net, torch.zeros((32, 6, 25, 25), (32, 6, 51, 51), (32, 6, 75, 75)))
#hl.build_graph(net, torch.zeros([(32, 6, 25, 25)], [(32, 6, 51, 51)], [(32, 6, 75, 75)]))
The same fix sketched in the previous issue applies here.)

Thank you.

module 'torch.onnx' has no attribute 'OperatorExportTypes'

I run this code in a Jupyter Notebook, but an error occurs:
import torch
import torchvision.models
import hiddenlayer as hl

model = torchvision.models.vgg16()
hl.build_graph(model, torch.zeros([1, 3, 224, 224]))

AttributeError: module 'torch.onnx' has no attribute 'OperatorExportTypes'

I am running the code under Ubuntu 16.04 with PyTorch 0.4.0.
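torch.onnx.OperatorExportTypes simply does not exist in PyTorch 0.4.0, and hiddenlayer relies on it, so upgrading PyTorch is the likely fix. A quick check:

import torch
print(hasattr(torch.onnx, "OperatorExportTypes"))  # False on 0.4.0, True on later releases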

How to fold when a constant is added?

Hello !
How can I write a proper folding rule for this?

[screenshot: graph with a dangling Constant node]

I don't know how to remove the "Constant" node by including it in a fold, since it has no parent...
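One option might be the Prune transform listed in hiddenlayer's transforms module, which drops nodes matching an op outright instead of folding them into a parent. A sketch with a toy module whose "+ 2" traces as a parentless Constant (whether Prune handles your exact graph this way is an assumption):

import torch
from torch import nn
import hiddenlayer as hl

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)

    def forward(self, x):
        return self.conv(x) + 2  # the scalar appears as a Constant node

transforms = [hl.transforms.Prune("Constant")]
graph = hl.build_graph(Net(), torch.zeros([1, 3, 16, 16]), transforms=transforms)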

passing model args problem

Hi!
I have a network using convolutional and fully connected layers (nothing very new indeed :) ) in torch:

img_channels = 5
img_H = 64
img_W = 64
n_batchs = 1
model = Net(img_channels)
imgs = torch.zeros([n_batchs, img_channels,img_H ,img_W])
reds = torch.zeros([n_batchs,1])

When I call it with
model(imgs, reds)
I get the correct layer tensor shapes:
input shape: torch.Size([1, 5, 64, 64])
conv0 shape: torch.Size([1, 64, 64, 64])
conv0p shape: torch.Size([1, 64, 32, 32])

i0:START <<<<<<<
Inception x_s1_0 : torch.Size([1, 48, 32, 32])
Inception x_s2_0 : torch.Size([1, 64, 32, 32])
Inception x_s1_2 : torch.Size([1, 48, 32, 32])
Inception x_pool0 : torch.Size([1, 48, 32, 32])
Inception x_s1_1 : torch.Size([1, 48, 32, 32])
Inception x_s2_1 : torch.Size([1, 64, 32, 32])
Inception x_s2_2 : torch.Size([1, 64, 32, 32])
Inception output : torch.Size([1, 240, 32, 32])
i1:START <<<<<<<
Inception x_s1_0 : torch.Size([1, 64, 32, 32])
Inception x_s2_0 : torch.Size([1, 92, 32, 32])
Inception x_s1_2 : torch.Size([1, 64, 32, 32])
Inception x_pool0 : torch.Size([1, 64, 32, 32])
Inception x_s1_1 : torch.Size([1, 64, 32, 32])
Inception x_s2_1 : torch.Size([1, 92, 32, 32])
Inception x_s2_2 : torch.Size([1, 92, 32, 32])
Inception output : torch.Size([1, 340, 32, 32])
i1p shape: torch.Size([1, 340, 16, 16])
i2:START <<<<<<<
Inception x_s1_0 : torch.Size([1, 92, 16, 16])
Inception x_s2_0 : torch.Size([1, 128, 16, 16])
Inception x_s1_2 : torch.Size([1, 92, 16, 16])
Inception x_pool0 : torch.Size([1, 92, 16, 16])
Inception x_s1_1 : torch.Size([1, 92, 16, 16])
Inception x_s2_1 : torch.Size([1, 128, 16, 16])
Inception x_s2_2 : torch.Size([1, 128, 16, 16])
Inception output : torch.Size([1, 476, 16, 16])
i3:START <<<<<<<
Inception x_s1_0 : torch.Size([1, 92, 16, 16])
Inception x_s2_0 : torch.Size([1, 128, 16, 16])
Inception x_s1_2 : torch.Size([1, 92, 16, 16])
Inception x_pool0 : torch.Size([1, 92, 16, 16])
Inception x_s1_1 : torch.Size([1, 92, 16, 16])
Inception x_s2_1 : torch.Size([1, 128, 16, 16])
Inception x_s2_2 : torch.Size([1, 128, 16, 16])
Inception output : torch.Size([1, 476, 16, 16])
i3p shape: torch.Size([1, 476, 8, 8])
i4:START <<<<<<<
Inception x_s1_0 : torch.Size([1, 92, 8, 8])
Inception x_s2_0 : torch.Size([1, 128, 8, 8])
Inception x_s1_2 : torch.Size([1, 92, 8, 8])
Inception x_pool0 : torch.Size([1, 92, 8, 8])
Inception x_s2_2 : torch.Size([1, 128, 8, 8])
Inception output : torch.Size([1, 348, 8, 8])
FC part :START <<<<<<<
flat shape: torch.Size([1, 22272])
concat shape: torch.Size([1, 22273])
fcn_in_features: 22273
fc0 shape: torch.Size([1, 1096])
fc1 shape: torch.Size([1, 1096])
fc2 shape: torch.Size([1, 180])
output shape: torch.Size([1, 180])

But when I call
hl.build_graph(model, (imgs, reds))
HL complains:
TypeError: forward() takes 2 positional arguments but 3 were given

Any idea?
JE
PS: torch 1.1.0 and a fresh HL install on my mac

BUG report: the stable version on pip has a bug

Hi there~

Your stable version on pip has a bug.

Line 324 in graph.py:
dot.attr("edge", style="doted",

causes the error:
Warning: gvrender_set_style: unsupported style doted - ignoring

which was terribly annoying and cost me a lot of time debugging.

The style is misspelled: it should be "dotted" instead of "doted". We also found that you have already changed it to "solid" on GitHub...

We hope you update the pip package as soon as possible, because the bug is really too funny ^_^.

shape error

Hi, I ran into a problem when trying to visualize Darknet:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("device = ", device)
model = Darknet("yolo/cfg/yolov3.cfg").to(device)
hl.build_graph(model, torch.zeros([1, 3, 224, 224]).to(device))
RuntimeError: shape '[1, 255, 841]' is invalid for input of size 199920

Thanks.

adaptive_avg_pool2d does not exist

Hi, I ran into this problem when visualizing DenseNet; the error is shown below:
c:\python35\lib\site-packages\torch\onnx\utils.py:446: UserWarning: ONNX export failed on ATen operator adaptive_avg_pool2d because torch.onnx.symbolic.adaptive_avg_pool2d does not exist .format(op_name, op_name))
Actually, there is no adaptive_avg_pool2d in my DenseNet; only nn.AvgPool2d() exists.

Support dict input?

I got a runtime error when I tried to plot a model graph whose input x actually is a dict.

import hiddenlayer as hl

x = getBuff(0)
hl.build_graph(net, x)

So I'm wondering whether this plotting tool can support dict input.

run time error on yolov3

File "getGraph.py", line 13, in
graph = hl.build_graph(model, torch.zeros([1,3, 416, 416])).to(device)
File "/home/tbaggu/Desktop/20-net-works/hiddenlayer/hiddenlayer/graph.py", line 143, in build_graph
import_graph(g, model, args)
File "/home/tbaggu/Desktop/20-net-works/hiddenlayer/hiddenlayer/pytorch_builder.py", line 70, in import_graph
trace, out = torch.jit.get_trace_graph(model, args)
File "/usr/local/lib/python3.5/dist-packages/torch/jit/init.py", line 77, in get_trace_graph
return LegacyTracedModule(f)(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 477, in call
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/jit/init.py", line 109, in forward
out = self.inner(*trace_inputs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 475, in call
result = self._slow_forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 465, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/tbaggu/Desktop/20-net-works/yolov3/models.py", line 208, in forward
x = module[0](x, img_size)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 475, in call
result = self._slow_forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 465, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/tbaggu/Desktop/20-net-works/yolov3/models.py", line 127, in forward
create_grids(self, img_size, (nx, ny), p.device)
File "/home/tbaggu/Desktop/20-net-works/yolov3/models.py", line 253, in create_grids
self.anchor_vec = self.anchors.to(device) / self.stride
RuntimeError: Expected object of type torch.FloatTensor but found type torch.LongTensor for argument #2 'other

Network is taken from
https://github.com/ultralytics/yolov3/blob/master/train.py

Clusters convolutions with different kernels

I use the newest version, bd0d36a. The problem is that it combines two different convolutions (in this case conv 3x3 > relu and conv 1x1 > relu) into one incorrect block (conv 3x3 > relu x2).

import torch
from torch import nn
import hiddenlayer as hl

model = nn.Sequential(
          nn.Conv2d(8, 8, 3, padding=1),
          nn.ReLU(),
          nn.Conv2d(8, 8, 1),
          nn.ReLU(),
          nn.MaxPool2d(2, 2))

hl.build_graph(model, torch.zeros([1, 8, 32, 32]))

[screenshot: incorrectly folded graph]

When there are different activations, the problem disappears:

model = nn.Sequential(
          nn.Conv2d(8, 8, 3, padding=1),
          nn.ReLU(),
          nn.Conv2d(8, 8, 1),
          nn.MaxPool2d(2, 2))

[screenshot: correctly separated graph]

Side notes

  • It clusters ReLU with operations (which I like a lot; otherwise there is too much visual noise), but not other activation functions (sigmoid, tanh, etc.). Is there some rationale for that? (My preference would be to cluster all reasonable activation functions.)
  • In general, I am very enthusiastic about neural network visualizations; see my overview, Simple diagrams of convoluted neural networks. I did mine for the limited case of purely sequential networks in Keras: https://github.com/stared/keras-sequential-ascii

Pointer Network - 2 Inputs issue

When calling:

hl_graph = hl.build_graph(pointer, (inputs, target))

with inputs: tensor([[3, 7, 2, 9, 0, 1, 8, 5, 4, 6]])
and target: tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])
I get the following error:

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _slow_forward(self, *input, **kwargs)
    463         tracing_state._traced_module_stack.append(self)
    464         try:
--> 465             result = self.forward(*input, **kwargs)
    466         finally:
    467             tracing_state.pop_scope()

<ipython-input-217-844f2c5b2d6a> in forward(self, inputs, target)
     80 
     81 
---> 82         return loss / seq_len

RuntimeError: Expected object of type torch.FloatTensor but found type torch.LongTensor for argument #2 'other'

And PointerNetwork is:

class PointerNet(nn.Module):
    def __init__(self, 
            embedding_size,
            hidden_size,
            seq_len,
            n_glimpses,
            tanh_exploration,
            use_tanh,
            use_cuda=USE_CUDA):
        super(PointerNet, self).__init__()
        
        self.embedding_size = embedding_size
        self.hidden_size    = hidden_size
        self.n_glimpses     = n_glimpses
        self.seq_len        = seq_len
        self.use_cuda       = use_cuda
                
        self.embedding = nn.Embedding(seq_len, embedding_size)
        self.encoder = nn.LSTM(embedding_size, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(embedding_size, hidden_size, batch_first=True)
        self.pointer = Attention(hidden_size, use_tanh=use_tanh, C=tanh_exploration, use_cuda=use_cuda)
        self.glimpse = Attention(hidden_size, use_tanh=False, use_cuda=use_cuda)
        
        self.decoder_start_input = nn.Parameter(torch.FloatTensor(embedding_size))
        self.decoder_start_input.data.uniform_(-(1. / math.sqrt(embedding_size)), 1. / math.sqrt(embedding_size))
        
        self.criterion = nn.CrossEntropyLoss()
        
    def apply_mask_to_logits(self, logits, mask, idxs): 
        batch_size = logits.size(0)
        clone_mask = mask.clone()

        if idxs is not None:
            clone_mask[[i for i in range(batch_size)], idxs.data] = 1
            logits[clone_mask] = -np.inf
        return logits, clone_mask
            
    def forward(self, inputs, target):
        """
        Args: 
            inputs: [batch_size x sourceL]
        """
        batch_size = inputs.size(0)
        seq_len    = inputs.size(1)
        assert seq_len == self.seq_len
        
        embedded = self.embedding(inputs)
        target_embedded = self.embedding(target)
        encoder_outputs, (hidden, context) = self.encoder(embedded)
        
        mask = torch.zeros(batch_size, seq_len).byte()
        if self.use_cuda:
            mask = mask.cuda()
            
        idxs = None
       
        decoder_input = self.decoder_start_input.unsqueeze(0).repeat(batch_size, 1)
        
        loss = 0
        
        for i in range(seq_len):
            
            
            _, (hidden, context) = self.decoder(decoder_input.unsqueeze(1), (hidden, context))
            
            query = hidden.squeeze(0)
            for i in range(self.n_glimpses):
                ref, logits = self.glimpse(query, encoder_outputs)
                logits, mask = self.apply_mask_to_logits(logits, mask, idxs)
                query = torch.bmm(ref, F.softmax(logits).unsqueeze(2)).squeeze(2) 
                
                
            _, logits = self.pointer(query, encoder_outputs)
            logits, mask = self.apply_mask_to_logits(logits, mask, idxs)
            
            decoder_input = target_embedded[:,i,:]
            
            loss += self.criterion(logits, target[:,i])
            
            
        return loss / seq_len

Didn't show strides for Conv in Pytorch 1.1

{'dilations': [1, 1], 'group': 1, 'kernel_shape': [5, 5], 'pads': [2, 2, 2, 2], 'strides': [2, 2]}
The PyTorch params use 'strides' (with an s at the end), but the code:
if "stride" in self.params: stride = self.params["stride"]
has no 's' at the end, so the graph title will not show the stride info.
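A minimal sketch of the fix being suggested, reading the plural key as a fallback so graphs traced with PyTorch 1.1 keep the stride in the title (hypothetical helper, not the project's actual code):

def get_stride(params):
    # Accept the old "stride" key and the plural "strides" from PyTorch 1.1
    return params.get("stride", params.get("strides"))

print(get_stride({'kernel_shape': [5, 5], 'strides': [2, 2]}))  # [2, 2]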

Neither pip nor conda can install hiddenlayer

pip 18.0
conda 4.6.11
ubuntu 16.04
nv-docker2

conda install graphviz python-graphviz

conda list

# Name                    Version                   Build  Channel
_tflow_select             2.1.0                       gpu  
absl-py                   0.7.1                    py36_0    conda-forge
asn1crypto                0.24.0                py36_1003    conda-forge
astor                     0.7.1                      py_0    conda-forge
backcall                  0.1.0                      py_0    conda-forge
binutils_impl_linux-64    2.31.1               h6176602_1    conda-forge
binutils_linux-64         2.31.1               h6176602_3    conda-forge
blas                      1.0                         mkl  
bleach                    2.1.4                      py_1    conda-forge
blinker                   1.4                        py_1    conda-forge
boto                      2.49.0                   py36_0  
boto3                     1.9.53                     py_0    conda-forge
botocore                  1.12.54                    py_0    conda-forge
bz2file                   0.98                       py_0    conda-forge
bzip2                     1.0.6                h470a237_2    conda-forge
c-ares                    1.15.0            h14c3975_1001    conda-forge
ca-certificates           2019.3.9             hecc5488_0    conda-forge
cairo                     1.16.0            ha4e643d_1000    conda-forge
certifi                   2019.3.9                 py36_0    conda-forge
cffi                      1.11.5           py36h5e8e0c9_1    conda-forge
chardet                   3.0.4                 py36_1003    conda-forge
cryptography              2.6.1            py36h9d9f1b6_0    conda-forge
cryptography-vectors      2.3.1                 py36_1000    conda-forge
cudatoolkit               9.0                  h13b8566_0  
cudnn                     7.3.1                 cuda9.0_0  
cupti                     9.0.176                       0  
curl                      7.64.0               h646f8bb_2    conda-forge
cycler                    0.10.0                     py_1    conda-forge
dbus                      1.13.0               h3a4f0e9_0    conda-forge
decorator                 4.3.0                      py_0    conda-forge
docutils                  0.14                  py36_1001    conda-forge
entrypoints               0.2.3                    py36_2    conda-forge
expat                     2.2.5                hfc679d8_2    conda-forge
fontconfig                2.13.1               h65d0f4c_0    conda-forge
freetype                  2.9.1                h6debe1e_4    conda-forge
gast                      0.2.2                      py_0    conda-forge
gcc_impl_linux-64         7.3.0                habb00fd_1    conda-forge
gcc_linux-64              7.3.0                h553295d_3    conda-forge
gensim                    3.5.0                    py36_0    conda-forge
gettext                   0.19.8.1             h5e8e0c9_1    conda-forge
git                       2.21.0          pl526h2882143_0    conda-forge
glib                      2.58.3            hf63aee3_1001    conda-forge
gmp                       6.1.2                hfc679d8_0    conda-forge
graphite2                 1.3.13            hf484d3e_1000    conda-forge
graphviz                  2.40.1               h0dab3d1_0    conda-forge
grpcio                    1.16.1           py36hf8bcb03_1  
gst-plugins-base          1.14.4            hdf3bae2_1001    conda-forge
gstreamer                 1.14.4            h66beb1c_1001    conda-forge
gxx_impl_linux-64         7.3.0                hdf63c60_1    conda-forge
gxx_linux-64              7.3.0                h553295d_3    conda-forge
h5py                      2.9.0           nompi_py36hf008753_1102    conda-forge
harfbuzz                  2.3.1                h6824563_0    conda-forge
hdf5                      1.10.4          nompi_h11e915b_1105    conda-forge
html5lib                  1.0.1                      py_0    conda-forge
icu                       58.2                 hfc679d8_0    conda-forge
idna                      2.7                   py36_1002    conda-forge
intel-openmp              2019.1                      144  
ipykernel                 5.0.0              pyh24bf2e0_1    conda-forge
ipython                   7.0.1            py36h24bf2e0_0    conda-forge
ipython_genutils          0.2.0                      py_1    conda-forge
ipywidgets                7.4.2                      py_0    conda-forge
jedi                      0.12.1                   py36_0    conda-forge
jinja2                    2.10                       py_1    conda-forge
jmespath                  0.9.3                      py_1    conda-forge
jpeg                      9c                   h470a237_1    conda-forge
jsonschema                2.6.0                    py36_2    conda-forge
jupyter                   1.0.0                      py_1    conda-forge
jupyter_client            5.2.3                      py_1    conda-forge
jupyter_console           5.1.0                    py36_0    conda-forge
jupyter_core              4.4.0                      py_0    conda-forge
jupyterlab                0.34.12                  py36_0    conda-forge
jupyterlab_launcher       0.13.1                     py_2    conda-forge
keras                     2.1.6                    py36_0    conda-forge
keras-applications        1.0.7                      py_0    conda-forge
keras-preprocessing       1.0.9                      py_0    conda-forge
kiwisolver                1.0.1            py36h2d50403_2    conda-forge
krb5                      1.16.3            h05b26f9_1001    conda-forge
libcurl                   7.64.0               h541490c_2    conda-forge
libedit                   3.1.20170329         haf1bffa_1    conda-forge
libffi                    3.2.1                hfc679d8_5    conda-forge
libgcc-ng                 7.3.0                hdf63c60_0    conda-forge
libgfortran               3.0.0                         1    conda-forge
libgfortran-ng            7.2.0                hdf63c60_3    conda-forge
libgpuarray               0.7.6             h14c3975_1003    conda-forge
libiconv                  1.15                 h470a237_3    conda-forge
libpng                    1.6.35               ha92aebf_2    conda-forge
libprotobuf               3.7.0                h8b12597_2    conda-forge
libsodium                 1.0.16               h470a237_1    conda-forge
libssh2                   1.8.0             h90d6eec_1004    conda-forge
libstdcxx-ng              7.3.0                hdf63c60_0    conda-forge
libtiff                   4.0.9                he6b73bb_2    conda-forge
libtool                   2.4.6             h14c3975_1002    conda-forge
libuuid                   2.32.1               h470a237_2    conda-forge
libxcb                    1.13                 h470a237_2    conda-forge
libxml2                   2.9.8                h422b904_5    conda-forge
mako                      1.0.7                      py_1    conda-forge
markdown                  2.6.11                     py_0    conda-forge
markupsafe                1.0              py36h470a237_1    conda-forge
matplotlib                3.0.3                    py36_0    conda-forge
matplotlib-base           3.0.3            py36h167e16e_0    conda-forge
mistune                   0.8.3            py36h470a237_2    conda-forge
mkl                       2018.0.3                      1  
mkl_fft                   1.0.10                   py36_0    conda-forge
mkl_random                1.0.2                    py36_0    conda-forge
nb_conda_kernels          2.1.1                    py36_1    conda-forge
nbconvert                 5.3.1                      py_1    conda-forge
nbformat                  4.4.0                      py_1    conda-forge
ncurses                   6.1                  hfc679d8_1    conda-forge
ninja                     1.8.2                h2d50403_1    conda-forge
nltk                      3.2.5                      py_0    conda-forge
notebook                  5.7.0                    py36_0    conda-forge
numpy                     1.15.0           py36h1b885b7_0  
numpy-base                1.15.0           py36h3dfced4_0  
oauthlib                  2.1.0                      py_0    conda-forge
olefile                   0.46                       py_0    conda-forge
openblas                  0.2.20                        8    conda-forge
openssl                   1.1.1b               h14c3975_1    conda-forge
pandas                    0.23.4           py36hf8a1672_0    conda-forge
pandoc                    2.3.1                         0    conda-forge
pandocfilters             1.4.2                      py_1    conda-forge
pango                     1.40.14           h4ea9474_1004    conda-forge
parso                     0.3.1                      py_0    conda-forge
pcre                      8.41                 hfc679d8_3    conda-forge
perl                      5.26.2               h470a237_0    conda-forge
pexpect                   4.6.0                    py36_0    conda-forge
pickleshare               0.7.5                    py36_0    conda-forge
pillow                    5.3.0            py36hc736899_0    conda-forge
pip                       18.0                     py36_1    conda-forge
pixman                    0.34.0            h14c3975_1003    conda-forge
prometheus_client         0.3.1                      py_1    conda-forge
prompt_toolkit            2.0.5                      py_0    conda-forge
protobuf                  3.7.0            py36he1b5a44_1    conda-forge
pthread-stubs             0.4                  h470a237_1    conda-forge
ptyprocess                0.6.0                 py36_1000    conda-forge
pycparser                 2.19                       py_0    conda-forge
pygments                  2.2.0                      py_1    conda-forge
pygpu                     0.7.6           py36h3010b51_1000    conda-forge
pyjwt                     1.6.4                      py_0    conda-forge
pyopenssl                 18.0.0                py36_1000    conda-forge
pyparsing                 2.3.0                      py_0    conda-forge
pyqt                      4.11.4                   py36_3    conda-forge
pysocks                   1.6.8                 py36_1002    conda-forge
python                    3.6.7                h0371630_0  
python-crfsuite           0.9.6            py36h2d50403_0    conda-forge
python-dateutil           2.7.3                      py_0    conda-forge
python-graphviz           0.10.1                     py_0    conda-forge
pytorch                   1.0.1           py3.6_cuda9.0.176_cudnn7.4.2_0    pytorch
pytz                      2018.5                     py_0    conda-forge
pyyaml                    5.1              py36h14c3975_0    conda-forge
pyzmq                     17.1.2           py36hae99301_0    conda-forge
qt                        4.8.7                         2  
qtconsole                 4.4.1                    py36_1    conda-forge
readline                  7.0                  haf1bffa_1    conda-forge
requests                  2.20.1                py36_1000    conda-forge
requests-oauthlib         1.0.0                      py_1    conda-forge
s3transfer                0.1.13                py36_1001    conda-forge
scikit-learn              0.19.1           py36hedc7406_0  
scipy                     1.1.0            py36hc49cb51_0  
send2trash                1.5.0                      py_0    conda-forge
setuptools                40.4.0                py36_1000    conda-forge
simplegeneric             0.8.1                      py_1    conda-forge
sip                       4.18                     py36_1    conda-forge
six                       1.11.0                   py36_1    conda-forge
smart_open                1.7.1                      py_0    conda-forge
sqlite                    3.25.2               hb1c47c0_0    conda-forge
tensorboard               1.12.0                py36_1000    conda-forge
tensorboardx              1.6                        py_0    conda-forge
tensorflow                1.12.0          gpu_py36he68c306_0  
tensorflow-base           1.12.0          gpu_py36h8e0ae2d_0  
tensorflow-gpu            1.12.0               h0d30ee6_0  
termcolor                 1.1.0                      py_2    conda-forge
terminado                 0.8.1                    py36_1    conda-forge
testpath                  0.4.1                    py36_0    conda-forge
theano                    1.0.4           py36hf484d3e_1000    conda-forge
tk                        8.6.9                ha92aebf_0    conda-forge
torchvision               0.2.1                      py_2    pytorch
tornado                   5.1.1            py36h470a237_0    conda-forge
traitlets                 4.3.2                    py36_0    conda-forge
twython                   3.7.0                      py_0    conda-forge
urllib3                   1.23                  py36_1001    conda-forge
wcwidth                   0.1.7                      py_1    conda-forge
webencodings              0.5.1                      py_1    conda-forge
werkzeug                  0.14.1                     py_0    conda-forge
wheel                     0.31.1                py36_1001    conda-forge
widgetsnbextension        3.4.2                    py36_0    conda-forge
xorg-kbproto              1.0.7             h14c3975_1002    conda-forge
xorg-libice               1.0.9             h516909a_1004    conda-forge
xorg-libsm                1.2.3             h84519dc_1000    conda-forge
xorg-libx11               1.6.7             h14c3975_1000    conda-forge
xorg-libxau               1.0.8                h470a237_6    conda-forge
xorg-libxdmcp             1.1.2                h470a237_7    conda-forge
xorg-libxext              1.3.4                h516909a_0    conda-forge
xorg-libxpm               3.5.12            h14c3975_1002    conda-forge
xorg-libxrender           0.9.10            h516909a_1002    conda-forge
xorg-libxt                1.1.5             h14c3975_1002    conda-forge
xorg-renderproto          0.11.1            h14c3975_1002    conda-forge
xorg-xextproto            7.3.0             h14c3975_1002    conda-forge
xorg-xproto               7.0.31            h14c3975_1007    conda-forge
xz                        5.2.4                h470a237_1    conda-forge
yaml                      0.1.7             h14c3975_1001    conda-forge
zeromq                    4.2.5                hfc679d8_6    conda-forge
zlib                      1.2.11               h470a237_3    conda-forge
(dl) root@dab3b36995ff:/workhere/hiddenlayer# 

pip install hiddenlayer
or cloning from GitHub (as shown in the README.md, developer mode)
both result in the error:

Exception:
Traceback (most recent call last):
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2869, in _dep_map
    return self.__dep_map
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2663, in __getattr__
    raise AttributeError(attr)
AttributeError: _DistInfoDistribution__dep_map

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/packaging/requirements.py", line 93, in __init__
    req = REQUIREMENT.parseString(requirement_string)
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 1632, in parseString
    raise exc
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 1622, in parseString
    loc, tokens = self._parse( instring, 0 )
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 1379, in _parseNoCache
    loc,tokens = self.parseImpl( instring, preloc, doActions )
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 3395, in parseImpl
    loc, exprtokens = e._parse( instring, loc, doActions )
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 1383, in _parseNoCache
    loc,tokens = self.parseImpl( instring, preloc, doActions )
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pyparsing.py", line 3183, in parseImpl
    raise ParseException(instring, loc, self.errmsg, self)
pip._vendor.pyparsing.ParseException: Expected stringEnd (at char 33), (line:1, col:34)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2949, in __init__
    super(Requirement, self).__init__(requirement_string)
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/packaging/requirements.py", line 97, in __init__
    requirement_string[e.loc:e.loc + 8]))
pip._vendor.packaging.requirements.InvalidRequirement: Invalid requirement, parse error at "'; extra '"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_internal/basecommand.py", line 141, in main
    status = self.run(options, args)
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 330, in run
    self._warn_about_conflicts(to_install)
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 456, in _warn_about_conflicts
    package_set, _dep_info = check_install_conflicts(to_install)
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_internal/operations/check.py", line 98, in check_install_conflicts
    package_set = create_package_set_from_installed()
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_internal/operations/check.py", line 41, in create_package_set_from_installed
    package_set[name] = PackageDetails(dist.version, dist.requires())
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2607, in requires
    dm = self._dep_map
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2871, in _dep_map
    self.__dep_map = self._compute_dependencies()
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2881, in _compute_dependencies
    reqs.extend(parse_requirements(req))
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2942, in parse_requirements
    yield Requirement(line)
  File "/root/anaconda3/envs/dl/lib/python3.6/site-packages/pip/_vendor/pkg_resources/__init__.py", line 2951, in __init__
    raise RequirementParseError(str(e))
pip._vendor.pkg_resources.RequirementParseError: Invalid requirement, parse error at "'; extra '"

module 'torch.onnx' has no attribute 'OperatorExportTypes'

I run this code in a Jupyter Notebook, but an error occurs:

import torch
import torchvision.models
import hiddenlayer as hl

# VGG16 with BatchNorm
model = torchvision.models.vgg16()

# Build HiddenLayer graph
# Jupyter Notebook renders it automatically
hl.build_graph(model, torch.zeros([1, 3, 224, 224]))

AttributeError: module 'torch.onnx' has no attribute 'OperatorExportTypes'

I am running the code under Ubuntu 16.04 with PyTorch 0.4.0.

Keras output section of graph is very complex

The output section of every Keras graph is complex and has a long chain of TF nodes that uses up a lot of screen space.

Request: some way to prune everything downstream of the "beginning of the outputs" and rename the end block "OUTPUT". Graphs with multiple outputs might have "OUTPUT #1", etc.
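Until such an option exists, a transform chain can approximate it; a sketch assuming hiddenlayer's Prune and Rename transforms (the op names below are placeholders and depend on what actually appears in your Keras graph):

import hiddenlayer as hl

transforms = [
    hl.transforms.Prune("Gather"),  # placeholder: drop a trailing TF bookkeeping op
    hl.transforms.Rename(op="Softmax", to="OUTPUT"),
]
graph = hl.build_graph(model, transforms=transforms)  # model: your Keras model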

Possible style misspelling

dot.attr("edge", style="doted",

Should this be dotted maybe? When I build_graph with a PyTorch model, I get

Warning: gvrender_set_style: unsupported style doted - ignoring
(the same warning is repeated many times)

RNN can not display correctly

  • Here is a simple example of an LSTM neural network:
    [screenshot: graph with the LSTM shown as an unnamed prim::PythonOp node]

We need hl.transforms.Rename() to rename the RNN node:

tsfm = [hl.transforms.Rename(op='prim::PythonOp', to='LSTM')]

[screenshot: graph with the node renamed to LSTM]

TypeError: 'Metric' object does not support indexing

Hi, I installed hiddenlayer and then tried to run the demo history_canvas.py, but something goes wrong. The error message is:

Step 0: loss: 1.042669959384648 accuracy: 0.06743948517530154
Traceback (most recent call last):
  File "./demos/history_canvas.py", line 43, in <module>
    c.draw_plot(h["loss"], h["accuracy"])
  File "/home/yangbiao/hiddenlayer-master/hiddenlayer/canvas.py", line 153, in wrapper
    self.render()
  File "/home/yangbiao/hiddenlayer-master/hiddenlayer/canvas.py", line 133, in render
    getattr(self, method)(*c[1], **c[2])
  File "/home/yangbiao/hiddenlayer-master/hiddenlayer/canvas.py", line 180, in draw_plot
    label = labels[i] if labels else m.name
TypeError: 'Metric' object does not support indexing

I used Python 3.5.2 and did not use Jupyter Notebook. I'm not sure whether the mistake is caused by the Python version? Please give me some advice. Thanks.
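From the traceback, draw_plot iterates over its first argument as a list of metrics (label = labels[i] if labels else m.name), so passing the two metrics as separate positional arguments makes the second one land in labels, which then fails to index. Wrapping them in a list is a plausible fix (a sketch, not verified against that hiddenlayer version):

c.draw_plot([h["loss"], h["accuracy"]])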

"torch._C.Value has no attribute 'uniqueName'" Error running with PyTorch 1.2

PyTorch Version: '1.2.0a'
Python: 3.6.8
Exception has occurred: AttributeError
'torch._C.Value' object has no attribute 'uniqueName'
  File "hiddenlayer/hiddenlayer/pytorch_builder.py", line 45, in <listcomp>
    return node.scopeName() + "/outputs/" + "/".join([o.uniqueName() for o in node.outputs()])
  File "hiddenlayer/hiddenlayer/pytorch_builder.py", line 45, in pytorch_id
    return node.scopeName() + "/outputs/" + "/".join([o.uniqueName() for o in node.outputs()])
  File "hiddenlayer/hiddenlayer/pytorch_builder.py", line 90, in import_graph
    hl_node = Node(uid=pytorch_id(torch_node), name=None, op=op,
  File "hiddenlayer/hiddenlayer/graph.py", line 143, in build_graph
    import_graph(g, model, args)
  File "visualizer.py", line 20, in <module>
    graph = hl.build_graph(model, input)

It works well with an older version of PyTorch (0.4.1).
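PyTorch 1.2 renamed torch._C.Value.uniqueName() to debugName(), which is exactly the attribute pytorch_builder.py touches here. A compatibility shim along these lines would cover both versions (a sketch, not the project's official fix):

def output_name(value):
    # torch._C.Value.uniqueName() became debugName() in PyTorch 1.2
    return value.debugName() if hasattr(value, "debugName") else value.uniqueName()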
