tpoisonooo / how-to-optimize-gemm

row-major matmul optimization

Home Page: https://zhuanlan.zhihu.com/p/65436463

License: GNU General Public License v3.0

Languages: C++ 88.18%, C 10.65%, CUDA 0.59%, Objective-C 0.20%, Mathematica 0.18%, Makefile 0.06%, Python 0.05%, MATLAB 0.04%, CMake 0.02%, Shell 0.01%, Assembly 0.01%, M 0.01%

Topics: gemm-optimization, armv7, arm64, cuda, cuda-kernel, ptx, vulkan, int4

how-to-optimize-gemm's Introduction

how-to-optimize-gemm

English | 简体中文

News

2023/08: the aarch64 backend adds CMake and mperf support; try -DMPERF_ENABLE=ON!

Introduction

row-major matmul optimization tutorial

backend    armv7   aarch64   aarch64-int8   cuda   cuda-int4   vulkan   x86
support    ✔️      ✔️        ✔️             ✔️     -           ✔️       ✔️
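
All of these backends optimize the same computation. As a reference point, here is the textbook baseline in C that the tutorial's optimized kernels start from (a minimal sketch with a made-up function name, not the repo's exact code):

    // naive row-major matmul: C[m x n] += A[m x k] * B[k x n]
    void matmul_naive(int m, int n, int k,
                      const float *a, int lda,
                      const float *b, int ldb,
                      float *c, int ldc) {
        for (int i = 0; i < m; ++i)
            for (int j = 0; j < n; ++j)
                for (int p = 0; p < k; ++p)
                    c[i * ldc + j] += a[i * lda + p] * b[p * ldb + j];
    }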

All backends and corresponding tutorials

backend        tutorial
aarch64        GEMM introduction
aarch64        GEMM caching
aarch64-int8   -
armv7          ARMv7 4x4 kernel: a lazy person's optimization exercise
cuda           The right way to get started with CUDA: how-to-optimize-gemm
cuda-int4      WIP: int4 alchemy essentials
vulkan         How to get started with Vulkan in a hurry

Build and run

Usage is similar for all backends:

  1. Enter the directory of the backend you want to use, and for the first run set OLD and NEW in the makefile to the same implementation, for example
$ cd aarch64
$ cat makefile
OLD := MMult_4x4_10
NEW := MMult_4x4_10
...
  2. make will compile and run the implementation that NEW points at, and copy output_MMult_4x4_10.m to output_new.m
$ make run
$ cat output_new.m
  3. Looking at the raw numbers is not very intuitive, so draw a line chart:
$ python3 -m pip install -r ../requirements.txt
$ python3 plot.py

Differences between backends

Each piece of hardware brings subtle differences:

  • NEW may point to a differently named implementation
  • the vulkan and int4 backends have extra prerequisites

1. armv7 and aarch64

A. Prepare an armv7/aarch64 Linux development environment; a Raspberry Pi, rk3399, or an AWS Arm server are all fine.

B. By default ARCH := native; build and run directly:

$ cd armv8 && make run

2. aarch64 int8

chgemm is an int8 gemm library.

  • the blue line is the chgemm implementation
  • the orange line is the aarch64 fp32 peak

Compared to the code in this tutorial, the differences are:

  1. It handles boundary cases, unlike the tutorial, which only considers multiples of 4;
  2. int8 reaches a maximum of 18.6 GFLOPS (for comparison, the fp32 theoretical peak on RK3399 is only 14.3 GFLOPS, and gemmlowp reaches about 12-14 GFLOPS);
  3. It is based on symmetric quantization: input values must lie in [-127, +127], and -128 must never appear (see the sketch after this list);
  4. It ships a small built-in example of integrating it into Android Studio.
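
To illustrate point 3, a sketch of symmetric quantization with -128 excluded (a hypothetical helper, not chgemm's actual API):

    #include <math.h>
    #include <stdint.h>

    // map fp32 values onto [-127, +127] given the tensor's max magnitude;
    // the final clamp guarantees -128 never appears, as chgemm requires
    int8_t quantize_symmetric(float x, float absmax) {
        float q = roundf(x * (127.0f / absmax));
        if (q >  127.0f) q =  127.0f;
        if (q < -127.0f) q = -127.0f;   // exclude -128
        return (int8_t)q;
    }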

chgemm has been merged into ncnn's INT8 convolution implementation.

3. x86 original

The flame repo referenced by the x86 backend is the original implementation; it differs from this repo in a few ways:

  1. The original is a column-major x86 SSE version (see the sketch after this list);
  2. Both are tutorials; the current MMult_4x4_17.c reaches 70% of the peak of an armv8.1 CPU;
  3. Boundary cases are not handled yet; only the case where M, N, and K are multiples of 4 is considered, and sub_kernel contains only the simplest kind of assembly. Practical use needs minor adjustments;
  4. For plotting, Octave was dropped (configuring its environment on every embedded device is too much trouble) in favor of Python.
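
The row-major/column-major difference in point 1 comes down to the indexing macro; a minimal illustration in C, where lda is the leading dimension:

    // column-major (the original flame tutorial): elements of a column
    // are contiguous in memory, and lda is the number of rows
    #define A_COL(i, j) a[(j) * lda + (i)]

    // row-major (this repo): elements of a row are contiguous,
    // and lda is the number of columns
    #define A_ROW(i, j) a[(i) * lda + (j)]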

4. CUDA

This version is faster than NVIDIA cuBLAS. In the performance chart:

  • the green line is MMult_cuda_12 without Tensor Cores
  • the blue line is cuBLAS without Tensor Cores

  1. Install the CUDA driver and nvcc yourself.
  2. OpenBLAS is required as the CPU baseline (see the sketch below):
$ apt install libopenblas-dev
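
A baseline run might call OpenBLAS through the standard CBLAS interface, for example (a sketch, not the repo's benchmark code):

    #include <cblas.h>

    // reference C = A * B via OpenBLAS, row-major fp32
    void sgemm_baseline(int m, int n, int k,
                        const float *a, const float *b, float *c) {
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k,
                    1.0f, a, k,   // lda = k for a row-major m x k matrix
                    b, n,         // ldb = n for a row-major k x n matrix
                    0.0f, c, n);  // ldc = n
    }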

5. Vulkan

  1. The vulkan build depends on the kompute API wrapper; see the vulkan build documentation for details.

  2. More on how to learn compute shaders.

6. CUDA int4

WIP

Some Tools

  • megpeak: measures hardware peak performance; supports arm/x86/OpenCL, etc.

  • perf: ships with the Linux system tools; for system-level performance analysis and disassembly

  • YHs_Sample: an expert's implementation

  • mperf: performance analysis and optimization toolkit

License

GPLv3

how-to-optimize-gemm's People

Contributors

luqiang-guo, megvii-mge, tpoisonooo


how-to-optimize-gemm's Issues

how to overlap the share2register and computing process?

I have another question about MMult_cuda_12.cu.
Honestly, I don't see how the shared-memory-to-register loads and the computation overlap. Is it the inline asm (PTX) that makes them run in parallel? The instructions are issued sequentially, so how can these two parts of the code hide each other's latency?
Part 1: loading shared memory into register panels

    lds128(panelA[pp][0], panelA[pp][1], panelA[pp][2], panelA[pp][3],
           aptr_base + ((subk + 1) % 8) * SMEM_LDA * sizeof(float));
    lds128(panelA[pp][4], panelA[pp][5], panelA[pp][6], panelA[pp][7],
           aptr_base + (((subk + 1) % 8) * SMEM_LDA + 64) * sizeof(float));
    lds128(panelB[pp][0], panelB[pp][1], panelB[pp][2], panelB[pp][3],
           bptr_base + ((subk + 1) % 8) * SMEM_LDB * sizeof(float));
    lds128(panelB[pp][4], panelB[pp][5], panelB[pp][6], panelB[pp][7],
           bptr_base + (((subk + 1) % 8) * SMEM_LDB + 64) * sizeof(float));

Part 2: computing on the panel data

    #pragma unroll
    for (int i = 0; i < 8; ++i) {
    #pragma unroll
        for (int j = 0; j < 8; ++j) {
            sum[i][j] += panelA[subk % 2][i] * panelB[subk % 2][j];
        }
    }
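
What makes the overlap possible is double buffering; schematically, with hypothetical stand-ins for the lds128 block and the FMA loop (not the kernel's real code):

    float panel[2][64];

    void load_panel(float *dst, int step) { (void)dst; (void)step; }  // stands in for the lds128 loads
    void compute_panel(const float *src)  { (void)src; }              // stands in for the FMA loop

    void k_loop(void) {
        for (int subk = 0; subk < 8; ++subk) {
            int cur = subk % 2;                 // FMAs read panel[cur]
            int nxt = 1 - cur;                  // loads fill panel[nxt]
            load_panel(panel[nxt], subk + 1);   // long-latency loads issue first
            compute_panel(panel[cur]);          // independent FMAs run meanwhile
        }
    }

Because step subk's FMAs never read the registers that step subk+1's loads write, the GPU (which stalls only when a not-yet-ready register is actually consumed) can keep the loads in flight while the arithmetic executes; the inline PTX mainly pins down this instruction order.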

A question about measuring peak floating-point performance

[screenshot]
The chip I am running on reports itself as NVIDIA, ARMv8 Processor rev 0 (v8l).
The Zhihu article says that when measuring the floating-point peak, the number of interleaved FMA instructions = FMA issue width x FMA instruction latency. I could not find a manual for this chip, but the Cortex-A57 manual records the following:
[screenshot]
The FMA latency is 10 and the throughput is 2. I am not sure whether that throughput means the chip can issue two FMA instructions simultaneously (is it the chip that does the issuing?), but I tested with 10 FMA instructions (OP_FLOATS = 80) and with 20 (OP_FLOATS = 160): with 10 I measured 16.095492 GFLOPS, and with 20 I measured 18.759214 GFLOPS. What explains this?
My two guesses:
1. 10 FMA instructions is simply not the right count for measuring this chip's floating-point peak.
2. Perhaps the compiler automatically enabled multithreading? This seems more likely, because going from 4 to 10 instructions roughly doubles performance, while going from 10 to 20 adds only a little.
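
For context, peak tests of this kind boil down to a loop of independent FMA chains; with fewer chains than latency x issue width, each FMA stalls on its own previous result. A NEON sketch with made-up names (not megpeak's actual code, which hand-unrolls this in assembly to control register allocation):

    #include <arm_neon.h>

    // Cortex-A57: FMA latency 10, throughput 2 -> roughly 20 independent
    // accumulator chains are needed in flight to saturate both pipes
    #define CHAINS 20

    float peak_loop(long iters) {
        float32x4_t acc[CHAINS];
        float32x4_t x = vdupq_n_f32(1.0001f);
        float32x4_t y = vdupq_n_f32(0.9999f);
        for (int i = 0; i < CHAINS; ++i) acc[i] = vdupq_n_f32(0.0f);
        for (long it = 0; it < iters; ++it)
            for (int i = 0; i < CHAINS; ++i)
                acc[i] = vfmaq_f32(acc[i], x, y);  // chains are independent
        float s = 0.0f;
        for (int i = 0; i < CHAINS; ++i) s += vgetq_lane_f32(acc[i], 0);
        return s;  // keep results live so the loop is not optimized away
    }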

The CUDA version computes wrong results when m, n, k are not all equal

For example, in kernel_v3:

    float *begin_a = a + by * BLOCK * k; // by -> n
    float *begin_b = b + bx * BLOCK;     // bx -> m

This goes wrong when A and B are not square, e.g. m = k = 256, n = 128.

Row-major or column-major?

I looked at how MMult0.c and MMult1.c define their multi-dimensional array storage; could you take a look and check whether both are correct?
For MMult0.c

#define A(i,j) a[ (i)*lda + (j) ]
#define B(i,j) b[ (i)*ldb + (j) ]
#define C(i,j) c[ (i)*ldc + (j) ]

For MMult1.c

#define A(i,j) a[ (j)*lda + (i) ]
#define B(i,j) b[ (j)*ldb + (i) ]
#define C(i,j) c[ (j)*ldc + (i) ]

As you can see, the positions of i and j are swapped.
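
Concretely, the two macro sets address different layouts. For a 2x3 matrix, element (0,1) lands at a[1] under the MMult0.c convention (row-major, lda = 3 columns) but at a[2] under the MMult1.c convention (column-major, lda = 2 rows); a tiny check:

    #include <stdio.h>

    int main(void) {
        int lda_row = 3;  // row-major: lda = number of columns
        int lda_col = 2;  // column-major: lda = number of rows
        // MMult0.c style: (i)*lda + (j)  -> row-major
        printf("row-major    (0,1) -> a[%d]\n", 0 * lda_row + 1);  // a[1]
        // MMult1.c style: (j)*lda + (i)  -> column-major
        printf("column-major (0,1) -> a[%d]\n", 1 * lda_col + 0);  // a[2]
        return 0;
    }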

Bug in the CUDA code

In MMult_cuda_7.cu, line 30: b_ptr += 64 * k should be b_ptr += 64 * n. Because the test matrices are square, the results happened to come out right anyway.
