Comments (3)
A directed acyclic graph is a directed graph that has no cycles. A vertex v of a directed graph is said to be reachable from another vertex u when there exists a path that starts at u and ends at v. As a special case, every vertex is considered to be reachable from itself (by a path with zero edges).
It's a nit that won't matter most of the time, but the topo sort implementation doesn't work if the graph contains cycles; i.e., there is a hard assumption that you're operating over a DAG.
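For example (a standalone sketch with stand-in Node objects rather than micrograd's Value, using the same visited-set DFS that builds the order): on a cyclic graph the DFS doesn't hang, it just silently returns an ordering that cannot be topological, because none exists.

class Node:
    # minimal stand-in: _prev holds a node's inputs, as in micrograd's Value
    def __init__(self, name, prev=()):
        self.name, self._prev = name, set(prev)

a = Node("a")
b = Node("b", prev=(a,))
a._prev.add(b)  # artificial cycle: a and b each list the other as an input

topo, visited = [], set()
def build_topo(v):
    if v not in visited:
        visited.add(v)
        for child in v._prev:
            build_topo(child)
        topo.append(v)

build_topo(b)
print([n.name for n in topo])  # ['a', 'b'] -- an order comes out, but no valid one exists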
That's correct. On the other hand, if it is a DAG, can we simply write the backward function as
def backward(self):
    self._backward()
    for child in self._prev:
        child.backward()
No. Your implementation is essentially a plain DFS that calls _backward on each node as soon as it reaches it, whereas the topological sort guarantees a node is processed only after every node that depends on it. Even this simple case can break your code:

b = 2*a
c = a + b

With the topological sort, back propagation is guaranteed to go in the order c -> b -> a, so a is processed only once both of its gradient contributions (directly from c, and from c through b) have accumulated. With your code the order could be c -> a -> b, which is wrong: a is reached before b has added its contribution to a.grad, and it is reached twice (once from c, once through b). Here a is a leaf, so the numbers happen to come out right anyway, but if a had inputs of its own it would propagate an incomplete gradient to them, and propagate it more than once.
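The fix is to build the full topological order first (a post-order DFS) and only then run _backward over it in reverse; this is essentially what micrograd's Value.backward does:

def backward(self):
    # build a topological order of the graph with a post-order DFS:
    # a node is appended only after all of its inputs have been
    topo = []
    visited = set()
    def build_topo(v):
        if v not in visited:
            visited.add(v)
            for child in v._prev:
                build_topo(child)
            topo.append(v)
    build_topo(self)

    # seed the output gradient, then apply the chain rule in reverse
    # topological order, so each node's grad is complete before it is
    # propagated to that node's inputs
    self.grad = 1
    for v in reversed(topo):
        v._backward()

Note that this version also seeds self.grad = 1, which the recursive proposal above never does.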
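To see the failure numerically (assuming micrograd's Value from micrograd.engine; the extra x * 1.0 node is only there to make a a non-leaf, since for a leaf the numbers come out right by accident):

from micrograd.engine import Value

def naive_backward(v):
    # the recursive backward proposed in the thread
    v._backward()
    for child in v._prev:
        naive_backward(child)

x = Value(3.0)
a = x * 1.0   # non-leaf: a has x as an input
b = 2 * a
c = a + b     # c = 3x, so dc/dx should be 3
c.grad = 1.0  # seed by hand, since the naive version never does
naive_backward(c)
print(x.grad)  # 4.0 or 6.0 depending on set iteration order; correct is 3.0

x2 = Value(3.0)
a2 = x2 * 1.0
c2 = a2 + 2 * a2
c2.backward()   # micrograd's topological-sort backward
print(x2.grad)  # 3.0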