ppl.pmx's Introduction

About OPMX

Open PPL Model Exchange (OPMX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. OPMX provides an open source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types.

Currently, OPMX focuses on the capabilities and hardware friendliness needed for Large Language Model (LLM) inference.

Important Notice

  • PMX was renamed to OPMX on 25/04/2024.
  • The operator domain has also changed to opmx (see the TOC).
  • You can find the old code at llm_v1.

Operator spec

Table of Contents: Link

About adding a new operator: Link

About updating an operator's version: Link

Use OPMX Python API

OPMX provides a functional API based on torch.autograd.Function.

Clone the OPMX repo, and import torch_function like this:

import pmx_llm.torch_function as OPMX

Then use it like PyTorch's functional API:

norm, skip_out = OPMX.skip_rms_norm(x, weight, skip_in, -1, eps)
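
For reference, here is a minimal plain-PyTorch sketch of what a fused skip + RMSNorm typically computes. The exact semantics are defined by the OPMX operator spec; the function below is an illustrative assumption, not the repo's implementation:

import torch

def skip_rms_norm_reference(x, weight, skip_in, axis=-1, eps=1e-5):
    # Assumed semantics: add the skip connection, then apply RMSNorm.
    # The pre-norm sum is also returned so it can feed the next
    # layer's skip input.
    skip_out = x + skip_in
    variance = skip_out.pow(2).mean(dim=axis, keepdim=True)
    norm = skip_out * torch.rsqrt(variance + eps) * weight
    return norm, skip_out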

You can use these APIs in PyTorch to build your own custom model.

All OPMX functions can be exported as custom operators via torch.onnx.export.
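
For example, a minimal export sketch (the module, shapes, and file name here are illustrative assumptions, not taken from the repo):

import torch
import pmx_llm.torch_function as OPMX

class Block(torch.nn.Module):
    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x, skip_in):
        # Traces as an opmx::* custom operator during export.
        return OPMX.skip_rms_norm(x, self.weight, skip_in, -1, self.eps)

model = Block(dim=4096)
x = torch.randn(1, 8, 4096)
skip = torch.zeros_like(x)
torch.onnx.export(model, (x, skip), "block.onnx",
                  input_names=["x", "skip_in"],
                  output_names=["norm", "skip_out"])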

Model Zoo

Some open-source models are provided in our model zoo; see the repository for the current list.

ppl.pmx's People

Contributors

alcanderian, caiyesd, chielonewctle, helloyongyang, jzz24, openppl-public, ouonline, syheliel, vincent-syr, yinfan98


ppl.pmx's Issues

baichuan13B: static/dynamic batching results diverge after generating a certain length

Command lines:

OMP_NUM_THREADS=4 torchrun --nproc_per_node 1 Demo13B.py --ckpt_dir /mnt/hpc/share/baichuan_pmx_model/ --tokenizer_path /mnt/hpc/share/baichuan_13B_model/tokenizer.model --fused_qkv 1 --fused_kvcache 1 --auto_causal 1 --quantized_cache 1 --dynamic_batching 0

OMP_NUM_THREADS=4 torchrun --nproc_per_node 1 Demo13B.py --ckpt_dir /mnt/hpc/share/baichuan_pmx_model/ --tokenizer_path /mnt/hpc/share/baichuan_13B_model/tokenizer.model --fused_qkv 1 --fused_kvcache 1 --auto_causal 1 --quantized_cache 1 --dynamic_batching 1

Result 0:
I believe the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the

Result 1:
I believe the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of life is the meaning of

Good first issue

Hi, I saw your project in OSPP. I'm wondering whether there are any good issues to get started with?

Problems encountered when converting the llama2 70B model

Converting llama2 70B directly with the facebook scripts under Model_zoo requires MP=8, but this machine does not have 8 GPUs, so setting --nproc_per_node 8 fails.
Using the huggingface conversion script instead reports an error:
Traceback (most recent call last):
  File "/data/asc24llama/zbtrs/ppl.pmx/model_zoo/llama/huggingface/ConvertWeightToPmx.py", line 132, in <module>
    main()
  File "/data/asc24llama/zbtrs/ppl.pmx/model_zoo/llama/huggingface/ConvertWeightToPmx.py", line 126, in main
    write_pmx_model(
  File "/data/asc24llama/zbtrs/ppl.pmx/model_zoo/llama/huggingface/ConvertWeightToPmx.py", line 92, in write_pmx_model
    wk = unpermute(hf_model_state_dict[f"model.layers.{layer_i}.self_attn.k_proj.weight"], num_kv_heads, key_value_dim, hidden_dim)
  File "/data/asc24llama/zbtrs/ppl.pmx/model_zoo/llama/huggingface/ConvertWeightToPmx.py", line 83, in unpermute
    return w.view(n_heads, 2, dim1 // n_heads // 2, dim2).transpose(1, 2).reshape(dim1, dim2)
RuntimeError: shape '[8, 2, 512, 8192]' is invalid for input of size 8388608
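
For context, a quick element-count check of the failing view (assuming the LLaMA-2 70B grouped-query layout with 8 KV heads and head_dim 128; this is a diagnostic sketch, not a confirmed fix):

# The failing view asks for shape [8, 2, 512, 8192]:
assert 8 * 2 * 512 * 8192 == 67_108_864   # elements requested
# The 70B k_proj.weight is [1024, 8192] (8 KV heads * head_dim 128):
assert 1024 * 8192 == 8_388_608           # elements available (matches the error)
# dim1 // n_heads // 2 == 512 implies dim1 == 8192, so it looks like
# hidden_dim was passed where key_value_dim (1024) was expected;
# with dim1 == 1024 the view would be the plausible [8, 2, 64, 8192]:
assert 1024 // 8 // 2 == 64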

Using A10 GPUs to export the llama 13B model runs out of memory

Running Demo.py succeeds, but exporting the model hits CUDA OOM.
Device: A10 x 2

command:
OMP_NUM_THREADS=1 torchrun --nproc_per_node 2 Export.py --ckpt_dir /data/workspace/CodeLlama-13b --tokenizer_path /data/workspace/CodeLlama-13b/tokenizer.model --export_path /data/workspace/13_v2 --fused_qkv 1 --fused_kvcache 1 --auto_causal 1 --quantized_cache 1 --dynamic_batching 1

  File "/data/workspace/ppl/ppl.pmx/model_zoo/llama/facebook/Export.py", line 44, in <module>
    fire.Fire(main)
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/data/workspace/ppl/ppl.pmx/model_zoo/llama/facebook/Export.py", line 40, in main
    generator.export(export_path)
  File "/data/workspace/ppl/ppl.pmx/model_zoo/llama/facebook/../../llama/modeling/dynamic_batching/Pipeline.py", line 291, in export
    torch.onnx.export(
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/torch/onnx/utils.py", line 506, in export
    _export(
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/torch/onnx/utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/torch/onnx/utils.py", line 989, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/torch/onnx/utils.py", line 893, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/torch/jit/_trace.py", line 1268, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/torch/jit/_trace.py", line 114, in wrapper
    tuple(x.clone(memory_format=torch.preserve_format) for x in args)
  File "/root/miniconda3/envs/test/lib/python3.10/site-packages/torch/jit/_trace.py", line 114, in <genexpr>
    tuple(x.clone(memory_format=torch.preserve_format) for x in args)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 68.00 MiB (GPU 1; 22.20 GiB total capacity; 21.58 GiB already allocated; 12.12 MiB free; 21.58 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I have tried the settings below:
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:32
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:64
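
A generic workaround worth trying (not specific to this repo, and assuming the export path works on CPU) is to run the ONNX export on CPU, where host RAM is usually far larger than the A10's 22 GiB:

import torch

def export_on_cpu(model: torch.nn.Module, example_inputs: tuple, out_path: str):
    # Move the model and inputs to CPU before tracing so the export
    # does not compete with the weights for GPU memory.
    model = model.to("cpu").eval()
    cpu_inputs = tuple(t.to("cpu") for t in example_inputs)
    with torch.no_grad():
        torch.onnx.export(model, cpu_inputs, out_path)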

[LLaMA2-7b] Warning: The shape inference of opmx::ParallelEmbedding type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)

script:

pushd ./model_zoo/llama/facebook/
export MP=1
OMP_NUM_THREADS=${MP} torchrun --nproc_per_node ${MP} \
Export.py --ckpt_dir /data/model_weight/llama/llama-2-7b/ \
--tokenizer_path /data/model_weight/llama/tokenizer.model \
--export_path ./out \
--fused_qkv 1 --fused_kvcache 1 --auto_causal 1 \
--quantized_cache 1 --dynamic_batching 1 

warning:

Loaded: layers.27.attention.wv.weight -> layers.27.attention.wqkv.weight[torch.Size([4096, 4096])]
Loaded: layers.27.attention.wo.weight -> layers.27.attention.wo.weight[torch.Size([4096, 4096])]
Loaded: layers.27.feed_forward.w1.weight -> layers.27.feed_forward.wu.weight[torch.Size([11008, 4096])]
Loaded: layers.27.feed_forward.w2.weight -> layers.27.feed_forward.w2.weight[torch.Size([4096, 11008])]
[ignore]
Loaded: layers.31.ffn_norm.weight -> layers.31.ffn_norm.weight[torch.Size([4096])]
rope.freqs is not loaded.
Loaded in 9.71 seconds
[rank0]:[W shape_type_inference.cpp:1973] Warning: The shape inference of opmx::ParallelEmbedding type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[rank0]:[W shape_type_inference.cpp:1973] Warning: The shape inference of opmx::ColumnParallelLinear type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[rank0]:[W shape_type_inference.cpp:1973] Warning: The shape inference of opmx::Reshape type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[ignore]
[rank0]:[W shape_type_inference.cpp:1973] Warning: The shape inference of opmx::ColumnParallelLinear type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
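
The warning means the exporter cannot infer output shapes for the opmx::* custom operators. A sketch of how a symbolic function can declare its output type follows; the operator name and shape rule here are illustrative assumptions, not the repo's actual code:

import torch

class ParallelEmbeddingFn(torch.autograd.Function):
    # Illustrative custom op; the real OPMX function differs.

    @staticmethod
    def forward(ctx, ids, weight):
        return torch.nn.functional.embedding(ids, weight)

    @staticmethod
    def symbolic(g, ids, weight):
        out = g.op("opmx::ParallelEmbedding", ids, weight)
        # Declare the output type so ONNX shape inference can continue
        # downstream (assumes shapes are known during tracing):
        out_shape = list(ids.type().sizes()) + [weight.type().sizes()[-1]]
        out.setType(weight.type().with_sizes(out_shape))
        return out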

How to convert other models

Hello, how can I convert other models, such as bloom or whisper?
Should I refer to the llama conversion commands?
Also, how can I obtain a model's params.json? None of the models I downloaded from HF include this file.
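
As a starting point, the fields of a Meta-style params.json can usually be derived from a Hugging Face config.json. The mapping below is an assumption for LLaMA-family models; verify the exact field names against the repo's conversion scripts:

import json

# Hypothetical HF config.json -> Meta/LLaMA-style params.json mapping.
with open("config.json") as f:
    hf = json.load(f)

params = {
    "dim": hf["hidden_size"],
    "n_layers": hf["num_hidden_layers"],
    "n_heads": hf["num_attention_heads"],
    "n_kv_heads": hf.get("num_key_value_heads", hf["num_attention_heads"]),
    "norm_eps": hf["rms_norm_eps"],
    "vocab_size": hf["vocab_size"],
    "intermediate_dim": hf["intermediate_size"],  # field name is a guess
}

with open("params.json", "w") as f:
    json.dump(params, f, indent=2)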
