These are released weights recreated from Stanford Alpaca, an experiment in fine-tuning LLaMA on a synthetic instruction dataset.
Yes! See the announcement thread for our frontend, where you can try the 7B: https://twitter.com/PointNetwork/status/1637178814210908160
- Put the LLaMA weights into the `original/` folder, so that the 7B version ends up at `original/7B`.
- Download the point-alpaca diffs into the `encrypted/` folder:

  ```sh
  wget -P encrypted/ -i filelist.txt
  ```
- Run the following command to decrypt:

  ```sh
  for f in "encrypted"/*; do if [ -f "$f" ]; then python3 decrypt.py "$f" "original/7B/consolidated.00.pth" "result/"; fi; done
  ```
You will now have the fine-tuned weights in the `result/` folder. Once they are there, you can delete the files in the `encrypted/` folder.
Other people will probably build better UIs, but for now, try running `python3 chat.py`. Before that, install the requirements via `pip3 install -r requirements.txt` (we really recommend doing this in a separate environment, for example one created with conda).
Find us in our Telegram chat: https://t.me/pointnetworkchat
We are not allowed to publish the weights for LLaMA, of course, even fine-tuned ones, but there is no problem publishing the difference: a patch that we suggest applying to the files. The encryption is a simple XOR between files (not very secure; not recommended for other applications!), ensuring that only people who have access to the original weights (from completely legal sources, of course) can transform them into the fine-tuned weights.
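To illustrate the idea, here is a minimal sketch of XOR-based file patching, assuming both input files have the same length (as a weight file and its diff do). This is not the actual `decrypt.py`, whose details may differ; the file paths in the usage comment are hypothetical:

```python
# Minimal sketch of XOR-based weight patching (illustrative only; the real
# decrypt.py may handle chunking and metadata differently).
from pathlib import Path

CHUNK = 1 << 20  # process files in 1 MiB chunks to keep memory use low


def xor_files(src_a: Path, src_b: Path, dst: Path) -> None:
    """XOR two equal-length files byte by byte and write the result to dst.

    Because XOR is its own inverse:
      diff      = original ^ finetuned
      finetuned = original ^ diff
    so the same function produces the publishable diff and recovers
    the fine-tuned weights from the original plus the diff.
    """
    with src_a.open("rb") as fa, src_b.open("rb") as fb, dst.open("wb") as out:
        while True:
            a = fa.read(CHUNK)
            b = fb.read(CHUNK)
            if not a and not b:
                break
            out.write(bytes(x ^ y for x, y in zip(a, b)))


# Hypothetical usage: recover fine-tuned weights from original + diff.
# xor_files(Path("original/7B/consolidated.00.pth"),
#           Path("encrypted/consolidated.00.pth.diff"),
#           Path("result/consolidated.00.pth"))
```

Anyone holding only the diff learns essentially nothing useful, since recovering the fine-tuned weights requires XORing against the original LLaMA file as well.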
13B is coming for sure; larger versions, maybe. Consider supporting us if you want it done faster. :)