
cnn-numpy's People

Contributors

wuziheng


cnn-numpy's Issues

A question about training

Hello, did the results you got with lenet_layers actually converge? I ran nearly 20 epochs and accuracy is stuck at about 10%.

Questions

Hi, could you explain this line in the convolution backward pass in base_conv.py: flip_weights = np.flipud(np.fliplr(self.weights))
Why do the weights need to be flipped left-right and up-down before the reshape? Thanks!
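The flip implements a standard result: the gradient of a valid cross-correlation with respect to its input is a *full* cross-correlation of the upstream gradient with the 180°-rotated kernel. A minimal single-channel numpy sketch (illustrative, not the repo's actual base_conv.py code) that checks this against a numerical gradient:

```python
import numpy as np

def corr2d(x, k):
    """Valid cross-correlation (what DL libraries call 'convolution')."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def backward_input(eta, k):
    """dL/dx: full cross-correlation of the upstream gradient eta with
    the 180-degree flipped kernel, i.e. np.flipud(np.fliplr(k))."""
    kh, kw = k.shape
    flipped = np.flipud(np.fliplr(k))
    # zero-pad eta so a 'valid' correlation becomes a 'full' one
    padded = np.pad(eta, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    return corr2d(padded, flipped)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))
k = rng.standard_normal((3, 3))
eta = rng.standard_normal((3, 3))          # upstream gradient dL/dout

dx = backward_input(eta, k)

# numerical gradient of L = sum(corr2d(x, k) * eta) for comparison
num = np.zeros_like(x)
eps = 1e-6
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        xp, xm = x.copy(), x.copy()
        xp[i, j] += eps
        xm[i, j] -= eps
        num[i, j] = (np.sum(corr2d(xp, k) * eta)
                     - np.sum(corr2d(xm, k) * eta)) / (2 * eps)

print(np.allclose(dx, num, atol=1e-5))     # → True
```

Intuitively, out[i, j] touches x[i+p, j+q] through k[p, q], so collecting all contributions to x[a, b] means indexing k at (a-i, b-j), which is exactly the flipped kernel slid over the padded eta.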

Questions

Hi,
Could you explain this line in Conv2D:
weights_scale = math.sqrt(reduce(lambda x, y: x * y, shape) / self.output_channels)
What is weights_scale for, and why is it computed this way?
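My reading (an assumption, since the issue does not show what `shape` is here): prod(shape) / output_channels works out to the fan-in of each output unit, and dividing standard-normal weights by sqrt(fan_in) is Xavier/He-style scaling that keeps the variance of pre-activations near 1 at initialization. A standalone sketch of the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in = 3 * 3 * 64                    # e.g. ksize * ksize * input_channels
x = rng.standard_normal((1000, fan_in))

w_raw = rng.standard_normal((fan_in, 32))
y_raw = x @ w_raw                      # unscaled: variance grows with fan_in
y_scaled = x @ (w_raw / np.sqrt(fan_in))   # 1/sqrt(fan_in) scaling

print(y_raw.std())     # ~ sqrt(fan_in), i.e. ~24: activations blow up
print(y_scaled.std())  # ~ 1: activations stay well-scaled
```

Without the scaling, each extra layer multiplies the activation magnitude by roughly sqrt(fan_in), which quickly saturates nonlinearities or overflows.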

A small question about layers/fc.py

First of all, thanks for the project; I learned a lot from it.
Regarding line 42 of layers/fc.py, self.bias -= alpha * self.bias: shouldn't it be self.bias -= alpha * self.b_gradient?
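The questioner's reading looks right: the first form ignores the gradient entirely and just shrinks the bias geometrically toward zero (it is weight decay, not gradient descent). A tiny sketch of the difference (hypothetical values, not the repo's code):

```python
import numpy as np

bias = np.ones(4)
b_gradient = np.full(4, 0.5)   # pretend gradient from backward()
alpha = 0.1

# as written in fc.py line 42: equivalent to bias *= (1 - alpha),
# which decays the bias regardless of the data
buggy = bias - alpha * bias

# intended SGD update: step against the gradient
fixed = bias - alpha * b_gradient

print(buggy)   # each entry 0.9
print(fixed)   # each entry 0.95
```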

A question about input and output sizes

With a 20*20 input and a 26-class letter-recognition output, why is accuracy so low after I change the input and output sizes?
(image attached)

A bug in pooling.py

I still think that at line 79 of pooling.py, self.index should be zeroed at the start of each forward pass, with self.index = np.zeros(self.input_shape), i.e.:

def forward(self, x):
    out = np.zeros([x.shape[0], x.shape[1] // self.stride,
                    x.shape[2] // self.stride, self.output_channels])
    self.index = np.zeros(self.input_shape)
    for b in range(x.shape[0]):
        ...

Open for discussion.
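To see why the reset matters, here is a minimal single-channel max-pooling sketch (illustrative names, not the repo's actual pooling.py): without re-zeroing self.index in forward, mask bits set by an earlier batch survive, and backward scatters gradient to stale positions.

```python
import numpy as np

class MaxPool:
    """2x2 max pooling, stride 2, single channel -- a teaching sketch."""
    def __init__(self, input_shape, stride=2):
        self.input_shape = input_shape
        self.stride = stride
        self.index = np.zeros(input_shape)

    def forward(self, x):
        s = self.stride
        self.index = np.zeros(self.input_shape)   # the reset in question
        out = np.zeros((x.shape[0] // s, x.shape[1] // s))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = x[i * s:(i + 1) * s, j * s:(j + 1) * s]
                out[i, j] = patch.max()
                r, c = np.unravel_index(patch.argmax(), patch.shape)
                self.index[i * s + r, j * s + c] = 1  # remember the argmax
        return out

    def backward(self, eta):
        # route each upstream gradient to the position that won the max
        s = self.stride
        return np.repeat(np.repeat(eta, s, 0), s, 1) * self.index

pool = MaxPool((4, 4))
x1 = np.arange(16, dtype=float).reshape(4, 4)
pool.forward(x1)          # maxes at the bottom-right of each window
pool.forward(-x1)         # maxes now at the top-left of each window
# With the reset, exactly one index bit per window survives (4 total);
# without it, the stale bits from x1 would remain set as well.
print(pool.index.sum())
```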

A question about gradient computation in fc.py

Hi, in the conv backward pass the weight gradient is computed as the transpose of x times the gradient passed down from the layer above, so why does fc.py seem to multiply the upstream gradient by the transpose of x instead?
Specifically, around line 27 of fc.py the source reads:
col_x = self.x[i][:, np.newaxis]
eta_i = eta[i][:, np.newaxis].T
self.w_gradient += np.dot(col_x, eta_i)
rather than:
col_x = self.x[i][np.newaxis, :].T
eta_i = eta[i][np.newaxis, :]
self.w_gradient += np.dot(col_x, eta_i)
I just tried the second form and it also trains fine. So I don't know whether my matrix calculus is wrong, whether this is a slip in the source, or whether both forms are equivalent. Could you clarify? Thanks!
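For what it's worth, the two forms are identical: x[i][:, np.newaxis] and x[i][np.newaxis, :].T are both (n, 1) column vectors, and eta[i][:, np.newaxis].T and eta[i][np.newaxis, :] are both (1, m) row vectors, so both dot products build the same outer product. A quick standalone check (not the repo code):

```python
import numpy as np

rng = np.random.default_rng(0)
x_i = rng.standard_normal(5)     # one input row, n = 5
eta_i = rng.standard_normal(3)   # one upstream-gradient row, m = 3

# variant from fc.py: column vector dotted with row vector
g1 = np.dot(x_i[:, np.newaxis], eta_i[:, np.newaxis].T)
# variant proposed in the question
g2 = np.dot(x_i[np.newaxis, :].T, eta_i[np.newaxis, :])

# both are the (5, 3) outer product of x_i and eta_i
print(np.array_equal(g1, g2))   # → True
```

So neither version is a bug; they are just two spellings of the same reshape, which is why both train fine.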
