carperai / openelm
Evolution Through Large Models
License: MIT License
Since DP doesn't work (see #29 (comment)): this will be important to add. It can be done after the v0.1 release, but preferably ASAP.
It would be nice to add more QD algorithms to the library, so we can support a wider variety of QD experiments. CMA-MAP-Elites, Multi-Emitter MAP-Elites, and Voronoi MAP-Elites are probably a good start.
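As a rough sketch of how multiple emitters might plug into the existing MAP-Elites loop (all names here are hypothetical, not the current OpenELM API): each emitter proposes offspring, which are evaluated and inserted if they beat the incumbent of their niche.

```python
import random

class RandomEmitter:
    """Hypothetical emitter: proposes an offspring by perturbing a random elite."""
    def __init__(self, sigma=0.1, rng=None):
        self.sigma = sigma
        self.rng = rng or random.Random(0)

    def ask(self, elites):
        parent = self.rng.choice(list(elites.values()))
        return [x + self.rng.gauss(0, self.sigma) for x in parent]

def multi_emitter_step(elites, emitters, evaluate, to_niche):
    """One generation of a multi-emitter MAP-Elites loop: each emitter
    proposes one child, which replaces the elite of its niche if fitter."""
    for emitter in emitters:
        child = emitter.ask(elites)
        niche, fitness = to_niche(child), evaluate(child)
        if niche not in elites or evaluate(elites[niche]) < fitness:
            elites[niche] = child
    return elites
```

A CMA-ME-style emitter would replace the Gaussian perturbation with samples from a CMA-ES distribution updated by insertion feedback; the loop itself stays the same.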
The sandbox doesn't build.
To reproduce, clone the repo and follow the instructions here: https://github.com/CarperAI/OpenELM/tree/main/elm/sandbox
I expected the sandbox to build with Docker.
I'm running Pop!_OS Linux.
We've selected a depth of just 1 for each niche (`self.bins`) in our implementation of MAP-Elites, correct (i.e., only the best result for each niche is saved)? Or is `n_bins` the depth, in which case we'd want to edit line 55 to insert the new performance if it is among the top `n_bins` fitnesses for that niche.
https://github.com/mathyouf/ELM/blob/main/map_elites/map_elites.py#L55
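If a configurable depth were desired, the insertion on line 55 could generalise to something like this sketch (a standalone illustration, not the file's actual code):

```python
def insert(archive, niche, fitness, genome, depth=1):
    """Insert `genome` into `niche`, keeping the top-`depth` entries by fitness.
    depth=1 reproduces the current behaviour: only the single best survives."""
    cell = archive.setdefault(niche, [])
    cell.append((fitness, genome))
    cell.sort(key=lambda fg: fg[0], reverse=True)
    del cell[depth:]  # discard everything below the retained depth
    return archive
```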
Outcome when running `python -m framework_simulator` in sodaracer_env in my current environment. Tested on both sodaracer JSON files in example_bodies, and on branches main and build-lego. I'm using Ubuntu 20.04 with Python 3.8.13.
Has anyone tried running this with a different environment setup and been able to see the sodaracer agent?
https://arxiv.org/abs/2211.01910
Add a feature to automatically optimise the prompt used for prompt-based mutation, using a metric like fitness. Ideally, also include an option to evolve the prompt in an ELM loop (perhaps by writing an OpenELM environment for prompt evolution).
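As a minimal sketch of the idea, a (1+1) hill climb over mutation prompts could look like this. Every name here is hypothetical; a real version would replace `mutate` with an LLM-based prompt rewrite, e.g. in the APE style from the paper linked above.

```python
import random

def optimise_prompt(seed_prompts, fitness, mutate, generations=20, rng=None):
    """Hill-climb a mutation prompt: keep a candidate whenever it scores
    at least as well as the incumbent under `fitness`."""
    rng = rng or random.Random(0)
    best = max(seed_prompts, key=fitness)
    for _ in range(generations):
        candidate = mutate(best, rng)
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best
```

For the ELM-loop variant, `fitness` would itself run a short evolution with the candidate prompt and score it by archive improvement, which is much more expensive but matches the paper's framing.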
https://github.com/CarperAI/ELM/blob/6bf347c13301afadfb093616a6b84d64a59a686f/map_elites/map_elites.py#L59-L62
To support replicating the datasets for Stage 2, we'll need to record all examples mapped to the archive, without discarding any.
I'm not sure if this is within the scope of the Stage 1 commit, but what's a good way to add support for this option (for the ELM sodaracer) while keeping it general as a map_elites implementation? Would this make sense as a parameter/option in the env that can be checked?
Originally posted by @daia99 in #9 (comment)
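One possible shape for this, sketched outside the real codebase (the class and flag names are hypothetical): the archive keeps its elite map as usual, but an opt-in flag logs every mapped example so the Stage 2 dataset can be rebuilt later.

```python
class Archive:
    """Elite map plus an optional full history of mapped examples."""
    def __init__(self, record_history=False):
        self.elites = {}                    # niche -> (fitness, genome)
        self.record_history = record_history
        self.history = []                   # every example ever mapped

    def add(self, niche, fitness, genome):
        if self.record_history:
            self.history.append((niche, fitness, genome))
        best = self.elites.get(niche)
        if best is None or fitness > best[0]:
            self.elites[niche] = (fitness, genome)
```

Exposing `record_history` through the env config would keep the map_elites core generic while letting the sodaracer env opt in.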
There are four different kinds of examples, as shown in the paper https://arxiv.org/abs/2302.12170, but right now not all of them are implemented here. Will they be added in the future?
Are training checkpoints available for the Hugging Face models (the paper states this is the case) and, if so, how are they accessible? E.g., this is provided as the `revision` option to `from_pretrained` for OLMo models.
Thanks
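For reference, `from_pretrained` does accept a `revision` argument selecting a branch or tag on the Hub. The sketch below just builds the kwargs for a per-step checkpoint branch; the branch naming scheme is an assumption here, so check the model card for the revisions actually published.

```python
def checkpoint_kwargs(model_name: str, step: int) -> dict:
    """Build from_pretrained kwargs for a hypothetical per-step branch.
    OLMo, for instance, publishes one Hub branch per saved training step;
    the exact format varies, so treat "step{step}" as a placeholder."""
    return {
        "pretrained_model_name_or_path": model_name,
        "revision": f"step{step}",  # hypothetical branch name
    }

# Usage (needs `transformers` and network access):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     **checkpoint_kwargs("allenai/OLMo-7B", 1000))
```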