If instruction data is created with OpenAI's GPT-3.5 or GPT-4, a large dataset can be produced as easily as in Stanford Alpaca. However, data created with OpenAI's API cannot be used to train commercial LLMs. We therefore use Gemma to automatically generate instruction data that can be used for commercial purposes.
- Gemma-Alpaca-Data-13k contains 13K instruction-following examples generated by Gemma-7b-it using the prompts from Alpaca. The JSON file has the same format as the Alpaca data, except that the output is generated by Gemma-7b-it:
- instruction: str, describes the task the model should perform. Each of the 13K instructions is unique.
- input: str, optional context or input for the task.
- output: str, the answer to the instruction as generated by Gemma-7b-it.
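Records in this format are typically rendered into training prompts with the standard Alpaca template. A minimal sketch, assuming the field names above and the Stanford Alpaca prompt text (the sample record values are illustrative, not taken from the dataset):

```python
# One record in the Gemma-Alpaca-Data-13k schema (illustrative values).
record = {
    "instruction": "Give three tips for staying healthy.",
    "input": "",
    "output": "1. Eat a balanced diet. 2. Exercise regularly. 3. Sleep well.",
}

# Alpaca-style prompt templates: one variant for tasks with an input
# field, one for tasks without.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(rec):
    """Render a record into an Alpaca-style training prompt."""
    if rec.get("input"):
        return PROMPT_WITH_INPUT.format(**rec)
    return PROMPT_NO_INPUT.format(instruction=rec["instruction"])

prompt = build_prompt(record)
```

The target during fine-tuning would then be the record's `output` field appended after `### Response:`.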
## Setup

```shell
pip install -r requirements.txt
```

## Generate

```shell
python generate_instruction_gemma.py
```
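At a high level, this follows the Stanford Alpaca self-instruct recipe with Gemma-7b-it in place of the OpenAI model: sample a few seed instructions as in-context examples, ask the model for a new instruction, and keep it only if it differs enough from the existing pool. A simplified sketch of that loop, with the model call stubbed out (`gemma_generate` is a placeholder for a real Gemma call, and a crude token-overlap check stands in for the ROUGE-based similarity filter used by Alpaca):

```python
import random

def gemma_generate(prompt):
    # Placeholder for a real call to Gemma-7b-it (e.g. via transformers).
    # Here it just returns a canned instruction for illustration.
    return "Describe the water cycle in two sentences."

def token_overlap(a, b):
    """Crude similarity: Jaccard overlap of tokens (stand-in for ROUGE-L)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def generate_instructions(seed_tasks, target=10, sim_threshold=0.7):
    pool = list(seed_tasks)
    while len(pool) < target:
        # Prompt the model with a few in-context examples from the pool.
        examples = random.sample(pool, min(3, len(pool)))
        prompt = "Come up with a new task.\n" + "\n".join(examples)
        candidate = gemma_generate(prompt).strip()
        # Keep only candidates sufficiently different from the pool.
        if candidate and all(
            token_overlap(candidate, p) < sim_threshold for p in pool
        ):
            pool.append(candidate)
        else:
            # The stub model repeats itself, so we stop; a real model
            # would keep producing fresh instructions.
            break
    return pool
```

The real script additionally generates the `input` and `output` fields for each accepted instruction and writes the results to JSON.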
This repo benefits from Stanford Alpaca and Gemma. Thanks for their wonderful work.