
Comments (4)

Panaetius avatar Panaetius commented on August 29, 2024

I've added references to all the benchmark tasks in the code of the implementations in 10ed26f

We currently have https://mlbench.readthedocs.io/projects/mlbench_benchmarks/en/latest/readme.html#benchmark-implementations for discussing implementation details. The state of that documentation is not great at the moment, since the task description at https://mlbench.readthedocs.io/en/latest/benchmark-tasks.html#a-image-classification-resnet-cifar-10 contains a lot of implementation details that are not part of the task itself. At least I think things like batch size are implementation details rather than part of the task. That really depends on how exactly we delineate Tasks from Implementations, which has been fuzzy so far.

The reason I bring this up is that it would be nice to have a Readme.rst in each implementation folder that documents the implementation and links back to the task it implements. That readme can then be included automatically in the benchmark docs at https://mlbench.readthedocs.io/projects/mlbench_benchmarks/en/latest/readme.html, which discuss implementation details and which in turn are linked from the task descriptions at https://mlbench.readthedocs.io/en/latest/benchmark-tasks.html

But for this to make any sense and not be confusing, Task and Implementation documentation have to be clearly separated.
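A minimal sketch of what such a per-implementation Readme.rst could look like (the title, section names, and placeholder values are illustrative, not taken from the repository):

```rst
PyTorch CIFAR-10 ResNet Implementation
======================================

Implements the task described in
`1a. Image Classification (ResNet, CIFAR-10)
<https://mlbench.readthedocs.io/en/latest/benchmark-tasks.html#a-image-classification-resnet-cifar-10>`_.

Implementation details
----------------------

- Framework: PyTorch
- Batch size: ...
- Random seed: ...
```

Sphinx could then pull each of these into the benchmark docs, for example with an ``.. include::`` directive pointing at the implementation folder, so the readme stays the single source of truth.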

from mlbench-benchmarks.

martinjaggi avatar martinjaggi commented on August 29, 2024

very good, let's do this


Panaetius avatar Panaetius commented on August 29, 2024

I added a first set of rudimentary Readme.rst files in 343fcb4 and linked them into the generated docs.

How do we split Task and Implementation? I.e., what is still part of a Task, and what is implementation-specific?


martinjaggi avatar martinjaggi commented on August 29, 2024

looks good.

About the split: the idea is that the official result metrics need to be reproducible from the task description alone, so the parameters described there will basically stay. But the task needs to be implementable in, say, both PyTorch and TensorFlow; the rest goes in the implementation (things like random seed belong there, even if they have a slight influence on results, hopefully only very little).
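Under that rule, the documentation split could look roughly like this (all parameter names and values here are illustrative placeholders, not the official task definition):

```rst
Task description (framework-agnostic; fixes the official metrics):

- Model architecture
- Dataset and train/validation split
- Target metric (e.g. top-1 validation accuracy)

Implementation Readme.rst (framework-specific):

- Framework (PyTorch, TensorFlow, ...)
- Random seed
- Data-loading and preprocessing details
```

Anything that must be fixed to reproduce the official numbers stays in the task description; anything that can vary between a PyTorch and a TensorFlow port goes in the implementation readme.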

