flowritecom / flow-merge

flow-merge is a powerful Python library that enables seamless merging of multiple transformer-based language models using the most popular merge methods such as Model Soups, SLERP, TIES-Merging, or DARE.

License: Apache License 2.0

Python 100.00%
llms model-merging

flow-merge's Issues

Bug: Hugging Face hub authentication

๐Ÿ› Bug report

Summary

Currently, the library does not log the user into the Hugging Face Hub even if a token is passed in the config.yaml.

Actual Behavior

If a gated or private model is passed in the models list, the config loading function errors out:

flow-merge run --config ./biomistral/slerp_config.yaml --model_name biomistral_slerp_7b
flow_merge.lib.merge_runner - INFO - Starting merge...
flow_merge.lib.merge_runner - ERROR - Merge error: RuntimeError - Failed to load config for model mistralai/Mistral-7B-Instruct-v0.2

The error message could be improved, too.

Loading the model manually surfaced the actual error from the transformers library:

...
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/opt/conda/envs/flow-merge/lib/python3.10/site-packages/transformers/configuration_utils.py", line 688, in _get_config_dict
    resolved_config_file = cached_file(
  File "/opt/conda/envs/flow-merge/lib/python3.10/site-packages/transformers/utils/hub.py", line 416, in cached_file
    raise EnvironmentError(
OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2.
401 Client Error. (Request ID: Root=1-662e549f-06bec9942e4a5164323cb0d1;498fe681-a1ee-4564-94c3-04a3098285df)

Cannot access gated repo for url https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/resolve/main/config.json.
Access to model mistralai/Mistral-7B-Instruct-v0.2 is restricted. You must be authenticated to access it.

Expected behavior

If an HF token is passed, the user should be logged into the Hub.
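A minimal sketch of what the fix could look like, assuming the parsed config mirrors the YAML layout shown below (the `maybe_login` helper is hypothetical; `huggingface_hub.login` is the standard entry point):

```python
# Hypothetical sketch, not the library's actual code: log into the Hugging Face Hub
# before model configs are resolved, using the token from the parsed merge config.
from huggingface_hub import login

def maybe_login(config: dict) -> None:
    token = (config.get("hf_token") or {}).get("token")
    if token:
        login(token=token)  # authenticates this process so gated/private repos resolve
```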

To Reproduce

Merge configuration file (if relevant):

```yaml
method: slerp
method_global_parameters:
  t: 0.5
base_model: BioMistral/BioMistral-7B
models:
  - model: BioMistral/BioMistral-7B
  - model: mistralai/Mistral-7B-Instruct-v0.2
tokenizer:
  mode: base
  interpolation_method: linear
directory_settings:
  output_dir: ./biomistral/biomistral_slerp_7b/
hf_token:
  token: <your_token>
  trust_remote_code: True
device: cpu
```

Steps to reproduce the behavior:

  1. Create the config file above
  2. Run flow-merge run --config <path_to_file> --model_name model

Environment

  • OS: Ubuntu 22.04.4 LTS
  • Python version: 3.10.13
  • Library version: 0.1.0
  • Other relevant dependencies: NA

Bug: No supported operating systems mentioned & no testing details

๐Ÿ› Bug report

Summary

The repository documentation lacks explicit information regarding the supported operating systems.

Actual Behavior

The project's documentation or README file does not specify which operating systems are officially supported.

Expected behavior

Ideally, the repository should clearly state the compatible operating systems, whether it's Windows, macOS, Linux, or all of them, to guide users before they start using the project.

Additional context

The absence of operating system compatibility information might cause confusion for developers on unsupported platforms.

Possible Solution or Workaround

Suggested action:

The maintainer should update the repository's documentation to include a section on supported operating systems, ensuring users can quickly determine if the project is compatible with their development environment.

Bug: Scaling coefficient is `None` if not passed in config.

๐Ÿ› Bug report

Summary

The scaling coefficient is None if not passed in the config file.

Actual Behavior

The scaling coefficient ends up as None if it's not passed in the config file, resulting in a merge error.

Expected behavior

The scaling coefficient should use the default value if it's not passed in the config file.
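As an illustration only (the helper and the default value of 1.0 below are assumptions, based on the defaults visible elsewhere on this page, not confirmed library behavior), the fix could be as small as falling back to a default when the value is missing:

```python
# Hypothetical sketch: fall back to a default when the scaling coefficient is omitted.
from typing import Optional

DEFAULT_SCALING_COEFFICIENT = 1.0  # assumed default, not confirmed by the library

def resolve_scaling_coefficient(value: Optional[float]) -> float:
    return DEFAULT_SCALING_COEFFICIENT if value is None else value
```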

To Reproduce

Merge configuration file (if relevant):

```yaml
method: dare-ties-merging
method_global_parameters:
  p: 0.3
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: mistralai/Mistral-7B-Instruct-v0.1
    weight: 0.5
  - model: BioMistral/BioMistral-Safetensors
    weight: 0.5
directory_settings:
  output_dir: ./biomistral/biomistral_dare_ties_7b/
hf_token:
  token: <your_token>
  trust_remote_code: True
device: cuda
```

Environment

  • OS: Ubuntu 22.04.4 LTS
  • Python version: 3.10.13
  • Library version: 0.1.0
  • Other relevant dependencies: NA

Bug: device Python object written in the model card config, and keys are sorted

๐Ÿ› Bug report

Summary

A Python object is written in the model card's config section.

Actual Behavior

The device Python object is written in the model card.

Also, keys are sorted instead of keeping their original order.

See https://huggingface.co/flow-ai-llm/biomistral_slerp_7b

Expected behavior

The Python object should not be written in the model card's config, and keys should keep the order of the original config.

To Reproduce

Merge configuration file (if relevant):

```yaml
method: slerp
method_global_parameters:
  t: 0.5
base_model: BioMistral/BioMistral-Safetensors
models:
  - model: BioMistral/BioMistral-Safetensors
  - model: mistralai/Mistral-7B-Instruct-v0.2
tokenizer:
  mode: base
  interpolation_method: linear
directory_settings:
  output_dir: ./biomistral/biomistral_slerp_7b/
hf_token:
  token: <your_token>
  trust_remote_code: True
device: cpu
```

Environment

  • OS: Ubuntu 22.04.4 LTS
  • Python version: 3.10.13
  • Library version: 0.1.0
  • Other relevant dependencies: NA

Feature: Allow for .bin models

Description

Support the .bin model weight file format. Currently, the library looks only for .safetensors files in the HF repo and errors out if none are found:

flow_merge.lib.merge_runner - ERROR - Merge error: EntryNotFoundError - 404 Client Error. (Request ID: Root=1-662b6d7f-44142b6d056f9513353e0c40;0454e034-eaa9-438b-bda0-33147d8e6f4b)

Entry Not Found for url: https://huggingface.co/Doctor-Shotgun/cat-v1.0-13b/resolve/main/model.safetensors.
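One way this could be supported, sketched here with a hypothetical `pick_weight_files` helper (`huggingface_hub.list_repo_files` is a real API), is to fall back to `.bin` shards when no `.safetensors` files exist in the repo:

```python
# Hypothetical sketch: prefer .safetensors weights but fall back to .bin shards.
from typing import List, Optional
from huggingface_hub import list_repo_files

def pick_weight_files(repo_id: str, token: Optional[str] = None) -> List[str]:
    files = list_repo_files(repo_id, token=token)
    safetensors = [f for f in files if f.endswith(".safetensors")]
    if safetensors:
        return safetensors
    return [f for f in files if f.endswith(".bin")]  # PyTorch .bin fallback
```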

Feature request: Enable passing HF_TOKEN from environment

🚀 Feature Request

Summary

Right now we provide HF_TOKEN as a variable in the merge_config.yaml file. I suggest we also enable passing it via an environment variable and pick it up automatically.

[Optional] Implementation

TBA, assigned to myself
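For illustration only (the helper below is hypothetical, not the planned implementation), the token lookup could simply fall back to the environment:

```python
# Hypothetical sketch: use the HF_TOKEN environment variable when the config has no token.
import os
from typing import Optional

def resolve_hf_token(config_token: Optional[str]) -> Optional[str]:
    return config_token or os.environ.get("HF_TOKEN")
```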

Bug: Merged model is uploaded to HF as private even if it is set to public

๐Ÿ› Bug report

Summary

Running the model upload command with `--private False` still uploads the model to HF as private.

Expected behavior

If `--private False` is passed, the model should be uploaded to HF as public.

To Reproduce

Run the model upload command, for example:

flow-merge upload --model_dir ./merged_model --username <hf_user_id> --model_name qwen_merge --token <hf_token> --private False
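A likely (unconfirmed) cause is that `--private False` is parsed as the non-empty string "False", which is truthy in Python. A sketch of more robust boolean parsing with argparse (the `str2bool` helper is hypothetical):

```python
# Hypothetical sketch: parse --private as a real boolean instead of a truthy string.
import argparse

def str2bool(value: str) -> bool:
    if value.lower() in ("true", "1", "yes"):
        return True
    if value.lower() in ("false", "0", "no"):
        return False
    raise argparse.ArgumentTypeError(f"expected a boolean, got {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--private", type=str2bool, default=True)
```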

Improve validation error messages for better user experience

Description

The current validation error messages in the flow-merge validate --config ... tool could be improved to provide a better user experience. When a user encounters a validation error, the message should clearly explain what went wrong and provide actionable steps to resolve the issue.

For example, the current error message for a missing required field in the configuration file looks like this:

Configuration file is invalid: 1 validation error for ValidatedInputData
models.0.model
  Field required [type=missing, input_value={'models': 'TheBloke/Llama-2-13B-fp16'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.4/v/missing

An improved error message could start with:

Error: Missing required field '<name_of_field>' in the configuration file.

...

Example of a correct model entry:
models:
  - model: TheBloke/Llama-2-13B-fp16
    weight:
    
...
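Beyond the wording, a rough sketch of how the friendlier messages could be produced, assuming pydantic is the validator (as the current output suggests); `format_validation_error` is a hypothetical helper:

```python
# Hypothetical sketch: rephrase pydantic validation errors into actionable messages.
from pydantic import ValidationError

def format_validation_error(exc: ValidationError) -> str:
    lines = ["Configuration file is invalid:"]
    for err in exc.errors():
        field = ".".join(str(part) for part in err["loc"])
        if err["type"] == "missing":
            lines.append(f"Error: Missing required field '{field}' in the configuration file.")
        else:
            lines.append(f"Error in field '{field}': {err['msg']}")
    return "\n".join(lines)
```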

Improving log message when task vectors are close to zero

Description

If there is almost no difference between a model's tensor and the base model's tensor, the task vector values are close to zero. If all task vectors are close to zero, the merge method simply returns the base tensor. The current warning message, "No task vectors. Returning the base model tensor.", is not very helpful:

```python
class TaskArithmetic(MergeMethod):
    def merge(
        self,
        weight: ModelWeight,
        base_model_tensor: torch.Tensor,
        models_tensors: Dict[Model, torch.Tensor],
        merge_method_settings: Union[TaskArithmeticSettings, TiesMergingSettings],
        base_model: Model,
    ) -> torch.Tensor:
        base_tensor_dtype = base_model_tensor.dtype

        task_vectors: Dict[Model, torch.Tensor] = self._get_task_vectors(
            base_model_tensor, models_tensors
        )

        if not task_vectors:
            logger.warning("No task vectors. Returning the base model tensor.")
            return base_model_tensor

        ...
```

It should provide a better explanation.
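For example, the warning could say why the base tensor is returned; a hedged sketch reusing the names from the snippet above (the wording is illustrative, not the library's):

```python
# Hypothetical sketch of a more informative message (not the library's actual wording).
if not task_vectors:
    logger.warning(
        "All task vectors are (near) zero for this tensor, i.e. the candidate models do not "
        "differ from the base model here; returning the base model tensor unchanged."
    )
    return base_model_tensor
```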

Feature request: Enable frankenmerging

🚀 Feature Request

Summary

Enable the frankenmerging technique.

Motivation

Some frankenmerges have surprised the community with the quality of their outputs.

Additional context

Bug: Generated model card shouldn't include ValidatedInputData object

Description

The ValidatedInputData object seems to be written to the model card:

---
library_name: transformers
tags:
- flow-merge
- merge

---
# neural_story

This model is the result of merge of the following models made with flow-merge:

- Base model:
	- mistralai/Mistral-7B-Instruct-v0.2
- Models:
	- NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story


## flow-merge config

The following configuration was used to merge the models:

```yaml
!!python/object:flow_merge.lib.merge_config.ValidatedInputData
__dict__:
  base_model: mistralai/Mistral-7B-Instruct-v0.2
  models:
  - !!python/object:flow_merge.lib.merge_settings.RawModelDict
    __dict__:
      path_or_id: mistralai/Mistral-7B-Instruct-v0.2
      weight: null
    __pydantic_extra__: null
    __pydantic_fields_set__: !!set
      path_or_id: null
    __pydantic_private__: null
  - !!python/object:flow_merge.lib.merge_settings.RawModelDict
    __dict__:
      path_or_id: NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story
      weight: 0.75
    __pydantic_extra__: null
    __pydantic_fields_set__: !!set
      weight: null
      path_or_id: null
    __pydantic_private__: null
  method: !!python/object/apply:flow_merge.lib.constants.MergeMethodIdentifier
  - addition-task-arithmetic
  device: !!python/object/apply:flow_merge.lib.constants.DeviceIdentifier
  - cpu
  method_global_parameters: !!python/object:flow_merge.lib.merge_settings.MethodGlobalParameters
    __dict__:
      scaling_coefficient: 1.0
      normalize: false
      p: null
      top_k: null
      t: null
    __pydantic_extra__: null
    __pydantic_fields_set__: !!set
      scaling_coefficient: null
      normalize: null
    __pydantic_private__: null
  directory_settings: !!python/object:flow_merge.lib.merge_settings.DirectorySettings
    __dict__:
      cache_dir: null
      local_dir: ./models
      output_dir: ./neural_story
    __pydantic_extra__: null
    __pydantic_fields_set__: !!set
      output_dir: null
      cache_dir: null
      local_dir: null
    __pydantic_private__: null
  hf_hub_settings: !!python/object:flow_merge.lib.merge_settings.HfHubSettings
    __dict__:
      token: null
      trust_remote_code: false
    __pydantic_extra__: null
    __pydantic_fields_set__: !!set
      token: null
      trust_remote_code: null
    __pydantic_private__: null
  tokenizer_settings: !!python/object:flow_merge.lib.merge_settings.TokenizerSettings
    __dict__:
      mode: base
      interpolation_method: linear
    __pydantic_extra__: null
    __pydantic_fields_set__: !!set {}
    __pydantic_private__: null
__pydantic_extra__: null
__pydantic_fields_set__: !!set
  hf_hub_settings: null
  method_global_parameters: null
  base_model: null
  device: null
  models: null
  directory_settings: null
  method: null
__pydantic_private__: null
```
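A possible fix, sketched under the assumption that pydantic v2 and PyYAML are in use (the `render_config_yaml` helper is hypothetical): dump plain data rather than the pydantic object itself, which also preserves the original key order raised in the earlier issue.

```python
# Hypothetical sketch: serialize plain data instead of the pydantic object itself.
import yaml

def render_config_yaml(validated_input_data) -> str:
    data = validated_input_data.model_dump(mode="json")  # enums/nested models -> plain types
    return yaml.safe_dump(data, sort_keys=False)          # keep the original key order
```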

Fix: `flow-merge inputs` command output is not complete and needs a revamp

๐Ÿ› Bug report

Summary

The command output is at a first-draft stage and needs to be completed.

$ flow-merge inputs

# Required parameters
- 'base_model': 			 the base model to be used for merging
- 'models': 				 list of dictionaries, each representing a model to be merged
 	- 'model': 				 each model dictionary should have a 'model' property specifying the model path or identifier
 	- 'weight': 				 the 'weight' property in a model dictionary is optional and specifies the weight of the model during merging
- 'method': 				 the merge method to be used, one of ['addition-task-arithmetic','ties-merging','slerp','dare-ties-merging','model-soup','passthrough']

# Optional parameters
- 'device': 				 the device to be used for merging one of ['cpu','cuda']
- 'method_global_parameters': 		 global parameters for the merge method
 	- 'normalize': bool				 lorem ipsum
 	- 'p': float					 lorem ipsum
 	- 'scaling_coefficient': float		 lorem ipsum
 	- 't': float					 lorem ipsum
 	- 'top_k': float				 lorem ipsum
- 'directory_settings': 		 directories for caching, loading, and saving models
 	- 'cache_dir': str				 lorem ipsum
 	- 'local_dir': str					 lorem ipsum
 	- 'output_dir': str		 lorem ipsum
- 'hf_hub_settings': 			 settings for interacting with the Hugging Face Hub
 	- 'token': str				 lorem ipsum
 	- 'trust_remote_code': bool					 lorem ipsum
- 'tokenizer_settings': 		 settings for the tokenizer used with the merged model
 	- 'interpolation_method': str		 lorem ipsum
 	- 'mode': str				 lorem ipsum

Feature request: dtype conversions and optimizations

🚀 Feature Request

Summary

Optimize data types (dtypes) for certain operations to improve performance and memory efficiency, e.g. use an integer or boolean dtype for tensor operations with masks.

Also, allow for dtype settings in config files.

Motivation

  • Improve performance
  • Give user more control over the dtype of the resulting model

[Optional] Implementation
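A minimal sketch of what this could look like, assuming a config-level dtype setting (the names below are illustrative, not the library's API):

```python
# Hypothetical sketch: compact mask dtypes plus an optional output dtype from the config.
from typing import Optional
import torch

DTYPES = {"float32": torch.float32, "float16": torch.float16, "bfloat16": torch.bfloat16}

def merge_with_mask(base: torch.Tensor, other: torch.Tensor, mask: torch.Tensor,
                    output_dtype: Optional[str] = None) -> torch.Tensor:
    mask = mask.to(torch.bool)                 # boolean masks avoid a full float copy
    merged = torch.where(mask, other, base)    # element-wise selection
    if output_dtype is not None:               # e.g. "bfloat16" from the merge config
        merged = merged.to(DTYPES[output_dtype])
    return merged
```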

Additional context

Feature request: Create a unique subdirectory for the merged model

🚀 Feature Request

Summary

Currently, the model output directory is specified in the merge configuration YAML file. While I can specify a subfolder for the model there, I believe it would be more intuitive and efficient to automatically create a unique subfolder per merge.

Motivation

This change helps avoid accidental overwrites of previously merged models.

[Optional] Implementation

Add a unique model name/identifier and create a subfolder where the merged model is stored.
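As a rough illustration of the suggestion (the helper name and naming scheme are assumptions):

```python
# Hypothetical sketch: create a unique subdirectory per merge under output_dir.
from datetime import datetime
from pathlib import Path
from uuid import uuid4

def make_run_dir(output_dir: str, model_name: str) -> Path:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    run_dir = Path(output_dir) / f"{model_name}-{stamp}-{uuid4().hex[:8]}"
    run_dir.mkdir(parents=True, exist_ok=False)  # fail rather than overwrite an earlier merge
    return run_dir
```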
