horance-liu / tensorflow-internals
An open-source ebook about the TensorFlow kernel and its implementation mechanisms.
Thank you for sharing this technical book. I have read it through once, but there are still many details I have not gone into deeply and need to study further against the source code. While reading, I was curious which tool you used to draw the flow diagrams; they look very polished. Looking forward to your reply.
Hello, I would like to cite this book as a reference. Could you provide a BibTeX entry for it? Thanks.
It would be very kind to release an English version of this book. I believe it covers the inner workings of the TensorFlow framework, and I am very interested in reading it.
May I ask: the TF2 kernel is not covered, right? The version analyzed here seems to be 1.2.
The PDF bookmarks only go up to 1.1.1 DistBelief, and the table of contents in the body is empty.
Thank you very much for sharing your experience and writing this book.
To help readers explore further, I suggest adding some related references.
Also, providing proper citations would better conform to academic conventions.
Best wishes
Can I buy you a cup of coffee, or a six-pack? I really appreciate your work here.
Thank you very much for the professional explanations; this is the most thorough book I have seen on the internals of the TensorFlow framework! Mr. Liu Guangcong, when will a print edition be published? It would be a pity not to publish such a good book.
If there is no plan to publish it, I hope the author will post a donation QR code; this work deserves to be rewarded.
Is this book based on TF 1.2?
There are a lot of big changes between that version and the current one.
Right?
First, regarding the example code quoted below: in my opinion the second line is redundant and easily misleading:
logits = tf.matmul(y4, w5) + b5
y = tf.nn.softmax(logits)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits( logits=logits, labels=t)
Second, judging from the comment here: https://github.com/tensorflow/tensorflow/blob/r1.0/tensorflow/examples/tutorials/mnist/mnist_softmax.py#L48
softmax_cross_entropy_with_logits has the same mathematical effect as applying softmax first, computing the cross entropy by hand, and then averaging the cross entropy over the items in the batch; it therefore seems unable to avoid the log(0) NaN problem described in the text.
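The numerical difference between the two paths can be seen without TensorFlow at all. The following NumPy sketch (illustrative only, not TensorFlow's actual kernel) contrasts the naive softmax-then-log computation, which overflows on extreme logits, with a log-sum-exp formulation of the same cross entropy, which stays finite:

```python
import numpy as np

logits = np.array([1000.0, 0.0, -1000.0])  # deliberately extreme logits
labels = np.array([0.0, 1.0, 0.0])         # one-hot target

# Naive path: softmax first, then log. exp(1000) overflows to inf,
# the softmax probabilities become nan/0, and log(0) yields -inf,
# so the final sum is nan.
with np.errstate(over="ignore", invalid="ignore", divide="ignore"):
    p = np.exp(logits) / np.sum(np.exp(logits))
    naive = -np.sum(labels * np.log(p))

# Stable path: write the cross entropy via log-softmax, shifting by
# max(logits) so every exponential stays bounded (the log-sum-exp trick).
m = logits.max()
log_softmax = logits - m - np.log(np.sum(np.exp(logits - m)))
stable = -np.sum(labels * log_softmax)

print(naive)   # nan
print(stable)  # 1000.0 (finite)
```

This is the standard reason a fused cross-entropy op can behave better in practice than the hand-written softmax-then-log pipeline, even though the two are algebraically identical.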
The TF 1.x line looks like it will stabilize after 1.13, and the community's attention will shift to 2.x.
Are there plans for a second edition based on 1.13?
I am really enjoying this book! Thanks for releasing this! I was just wondering if section "6.3 Transfering graphs" will be filled in? It looks like an empty section awaiting content.
Section 12.5 "Execution" contains the following passage:
At this point, the walkthrough of DirectSession.Run is complete. But how are the nodes within a Partition scheduled for execution, and how does Send/Recv between Partitions work?
Therefore, on the last mile, three things remain to be explored.
- How SendOp and RecvOp work
- How IntraProcessRendezvous works
- The Executor's scheduling algorithm
In 12.6 I only see a discussion of:
1. How SendOp and RecvOp work
I did not find the corresponding content for:
- How IntraProcessRendezvous works
- The Executor's scheduling algorithm
Is this content missing? Are there plans to fill it in later?
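For readers waiting on that section, the core idea of an intra-process rendezvous is a keyed mailbox where Send and Recv meet. Here is a minimal Python sketch loosely modeled on that semantics; all names are illustrative, not TensorFlow's API:

```python
import threading

class LocalRendezvous:
    """Keyed mailbox: send() deposits a value under a key, recv() blocks
    until a value for that key arrives. Loosely mirrors the semantics of
    an intra-process Send/Recv rendezvous (illustrative only)."""

    def __init__(self):
        self._cv = threading.Condition()
        self._items = {}  # key -> value

    def send(self, key, value):
        with self._cv:
            self._items[key] = value
            self._cv.notify_all()  # wake any recv() waiting on this key

    def recv(self, key, timeout=None):
        with self._cv:
            # Block until a send() for this key has deposited a value.
            while key not in self._items:
                if not self._cv.wait(timeout):
                    raise TimeoutError(key)
            return self._items.pop(key)

# Usage: a producer thread sends, the consumer blocks until the value is ready.
rdv = LocalRendezvous()
producer = threading.Thread(target=lambda: rdv.send("edge_0;tensor_a", 42))
producer.start()
value = rdv.recv("edge_0;tensor_a", timeout=5)
producer.join()
print(value)  # 42
```

The key point is that send() never blocks while recv() does, which is why SendOp can complete immediately and RecvOp must park until its partner partition produces the tensor.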
The learning-rate decay described in the text feeds lr into the optimization process through a placeholder, with the decay computed in Python between train steps.
In principle any optimizer can use this technique, so the following passage in the text:
"A better optimization algorithm, such as AdamOptimizer, can be used. As the number of iterations grows, the learning rate decays exponentially, yielding more stable accuracy and loss curves in the later stages of training."
may mislead readers into thinking that lr decay is a feature introduced by certain specific optimizers (e.g. AdamOptimizer).
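The scheme described above is indeed optimizer-agnostic: the schedule lives entirely on the Python side. A minimal sketch of such a per-step exponential decay (the constants and function name are illustrative; the resulting value would be fed into the graph through a learning-rate placeholder via feed_dict, regardless of which optimizer consumes it):

```python
import math

# Illustrative schedule constants, computed host-side between train steps.
INITIAL_LR = 0.003
MIN_LR = 0.0001
DECAY_SPEED = 2000.0  # larger value -> slower decay

def decayed_lr(step):
    """Exponentially decayed learning rate for a given global step."""
    return MIN_LR + (INITIAL_LR - MIN_LR) * math.exp(-step / DECAY_SPEED)

# The value would then be fed per step, e.g.
# sess.run(train_step, feed_dict={lr: decayed_lr(step), ...})
for step in (0, 1000, 10000):
    print(step, round(decayed_lr(step), 6))
```

Because the decay is plain Python arithmetic, swapping GradientDescentOptimizer for AdamOptimizer (or any other) changes nothing about the schedule itself.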
Will the text be available in English?
Thank you very much for sharing your learning experience. After a quick pass through the book, I have the following suggestions:
General suggestions:
Detailed issues:
Best wishes
I compiled a kindle version here: https://github.com/simon-mo/tensorflow-internals/blob/40aea3a6f9cd2d2c4d321524a673efd61cb027ad/tensorflow-internals-kindle.pdf
Had to fix some issues to make it compile on my Mac; that's why I'm not submitting a PR.