ghostjat / np

A Lite & Memory Efficient PHP Library for Scientific Computing

Home Page: https://ghostjat.github.io/Np/

License: MIT License

Languages: PHP 93.00%, C 7.00%
Topics: php8, ffi, blas, numphp, lite, memory, matrix, vector, computing, scientific-computing

np's Introduction


Np

Description


A lite, fast & memory-efficient mathematical PHP library for scientific computing.

Np (NumPHP) provides objects for computing with large sets of numbers in PHP.

Installation

Install Np into your project with Composer:

$ composer require ghostjat/np

Sample Code

require __DIR__ . '/../vendor/autoload.php';

use Np\matrix;

$ta = matrix::randn(1000, 1000); // generate a random 1000x1000 matrix
$tb = matrix::randn(1000, 1000); // generate another random 1000x1000 matrix
$ta->dot($tb);                   // dot product of the two matrices
$ta->getMemory();                // report memory used
$ta->time();                     // report time consumed
/**
 * 7.7mb
 * Time-Consumed:- 0.18390893936157
 */

Synopsis

WARNING:
This module is in its early stages and should be considered a work in progress. The interface is not final and may change in the future.

Requirements

  • PHP 8+ (64-bit) with the FFI extension, libblas, and liblapacke

Make sure all the necessary tools are installed: the FFI extension, libblas, and liblapacke.
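Before running any Np code, a plain-PHP probe can confirm the environment. This is a minimal sketch, not part of the library; the cblas declaration and the library file name 'libblas.so.3' are assumptions that vary by OS:

<?php
// Check the basics: 64-bit PHP 8+, the FFI extension, and a loadable BLAS.
if (PHP_INT_SIZE !== 8 || version_compare(PHP_VERSION, '8.0.0', '<')) {
    exit("64-bit PHP 8+ is required.\n");
}
if (!extension_loaded('ffi')) {
    exit("The FFI extension is not loaded.\n");
}
try {
    // Bind one representative CBLAS symbol; adjust the library name for
    // your platform (e.g. libopenblas.dylib via Homebrew on macOS).
    FFI::cdef('double cblas_dasum(int n, const double *x, int incx);', 'libblas.so.3');
    echo "FFI and libblas look usable.\n";
} catch (FFI\Exception $e) {
    echo "Could not load libblas: {$e->getMessage()}\n";
}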

Performance

System configuration: Intel(R) Core(TM) i3-2370M CPU @ 2.40GHz (64-bit), 8 GB RAM, PHP 8.0.5 (64-bit)

Current Benchmarks of this library


Data size: 500x500, Revolutions: 5, Iterations: 5

subject     mem_peak   best    mode    mean    worst   stdev
sum         3.606mb    0.014s  0.014s  0.015s  0.015s  0.000s
multiply    8.589mb    0.070s  0.071s  0.071s  0.071s  0.000s
lu          4.648mb    0.064s  0.065s  0.065s  0.068s  0.001s
eign        2.801mb    0.085s  0.086s  0.086s  0.088s  0.001s
cholesky    1.621mb    0.001s  0.001s  0.001s  0.001s  0.000s
svd         3.706mb    0.126s  0.126s  0.127s  0.133s  0.002s
normL2      1.621mb    0.003s  0.003s  0.003s  0.003s  0.000s
Pinverse    4.903mb    0.156s  0.156s  0.158s  0.163s  0.003s
inverse     1.819mb    0.016s  0.016s  0.016s  0.017s  0.000s
normL1      1.621mb    0.001s  0.001s  0.001s  0.001s  0.000s
dotMatrix   3.769mb    0.006s  0.006s  0.006s  0.006s  0.000s
det         4.662mb    0.066s  0.066s  0.067s  0.067s  0.000s
rref        1.529mb    9.227s  9.271s  9.309s  9.427s  0.072s
ref         1.818mb    0.007s  0.008s  0.008s  0.008s  0.000s
clip        8.516mb    0.073s  0.076s  0.075s  0.077s  0.002s
clipUpper   8.516mb    0.055s  0.056s  0.057s  0.059s  0.002s
clipLower   8.516mb    0.055s  0.058s  0.057s  0.059s  0.002s
joinBelow   4.517mb    0.027s  0.027s  0.027s  0.028s  0.000s
transpose   8.504mb    0.057s  0.057s  0.058s  0.059s  0.001s
joinLeft    4.511mb    0.025s  0.025s  0.026s  0.027s  0.001s
poisson     1.590mb    0.029s  0.029s  0.029s  0.030s  0.000s
gaussian    20.203mb   0.056s  0.056s  0.056s  0.056s  0.000s
randn       1.528mb    0.017s  0.017s  0.017s  0.017s  0.000s
uniform     1.528mb    0.021s  0.021s  0.021s  0.022s  0.000s
multiply    4.507mb    0.042s  0.042s  0.043s  0.045s  0.001s

Previous Benchmarks

benchmark                  subject     set  revs  its  mem_peak   mode     rstdev
eignBench                  eign        0    1     5    2.699mb    0.309s   ±4.51%
svdBench                   svd         0    1     5    3.604mb    0.148s   ±3.60%
poissonMatrixBench         poisson     0    1     5    11.738mb   0.105s   ±7.07%
gaussianMatrixBench        gaussian    0    1     5    11.738mb   0.112s   ±17.12%
randMatrixBench            randn       0    1     5    1.429mb    0.048s   ±2.37%
uniformMatrixBench         uniform     0    1     5    1.429mb    0.063s   ±8.16%
matrixTransposeBench       transpose   0    1     5    8.431mb    0.120s   ±1.32%
rrefBench                  rref        0    1     5    1.501mb    28.513s  ±1.90%
refBench                   ref         0    1     5    1.731mb    0.023s   ±7.24%
sumMatrixBench             sum         0    1     5    2.434mb    0.051s   ±3.59%
matrixPseudoInverseBench   inverse     0    1     5    4.775mb    0.222s   ±13.76%
matrixInverseBench         inverse     0    1     5    1.731mb    0.032s   ±127.50%
dotMatrixBench             dotMatrix   0    1     5    3.656mb    0.013s   ±27.94%
matrixL1NormBench          normL1      0    1     10   1.525mb    0.001s   ±0.80%
matrixL2NormBench          normL2      0    1     10   1.525mb    0.003s   ±1.63%

License

The code is licensed MIT and the documentation is licensed CC BY-NC 4.0.

Author

Shubham Chaudhary [email protected]


np's Issues

cannot multiply matrix by vector

In MATLAB/Octave, I can define an m x n matrix (3 x 5 in this case) and multiply it by an n x 1 column vector (5 x 1 in this case). It yields a 3 x 1 column vector:

X = [1 1 1 1 1; 2 2 2 2 2; 3 3 3 3 3]
    X =

       1   1   1   1   1
       2   2   2   2   2
       3   3   3   3   3

octave:427> w = [1;2;3;4;5]
    w =

       1
       2
       3
       4
       5

octave:428> X * w
    ans =

       15
       30
       45

However, Np cannot multiply a 3 x 5 matrix by a vector of size 5:

require __DIR__ . '/np/vendor/autoload.php';

use Np\matrix;
use Np\vector;

$x = matrix::ar([
    [1, 1, 1, 1, 1],
    [2, 2, 2, 2, 2],
    [3, 3, 3, 3, 3]
]);

$w = vector::ar([1, 2, 3, 4, 5]);

$p = $x->dot($w);      // throws: Mismatch Dimensions of given Objects! Obj-A col & Obj-B row amount need to be the same!
$p = $x->multiply($w); // throws the same exception
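For reference, the expected product is easy to compute in plain PHP (no Np calls, so it avoids the exception; purely illustrative):

// Multiply the 3x5 matrix by the length-5 vector by hand.
$X = [[1, 1, 1, 1, 1], [2, 2, 2, 2, 2], [3, 3, 3, 3, 3]];
$w = [1, 2, 3, 4, 5];
$p = [];
foreach ($X as $i => $row) {
    $p[$i] = 0;
    foreach ($row as $j => $value) {
        $p[$i] += $value * $w[$j];
    }
}
print_r($p); // Array ( [0] => 15 [1] => 30 [2] => 45 )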

PHP Fatal error: Uncaught FFI\Exception: Failed loading scope 'blas'

I have attempted to run the sample PHP script in your README file and it encounters a fatal error:

PHP Fatal error: Uncaught FFI\Exception: Failed loading scope 'blas' in /Users/sneakyimp/np/vendor/ghostjat/np/src/core/blas.php:28
Stack trace:
#0 /Users/sneakyimp/np/vendor/ghostjat/np/src/core/blas.php(28): FFI::scope('blas')
#1 /Users/sneakyimp/np/vendor/ghostjat/np/src/core/blas.php(73): Np\core\blas::init()
#2 /Users/sneakyimp/np/vendor/ghostjat/np/src/linAlgb/linAlg.php(45): Np\core\blas::gemm(Object(Np\matrix), Object(Np\matrix), Object(Np\matrix))
#3 /Users/sneakyimp/np/vendor/ghostjat/np/src/linAlgb/linAlg.php(30): Np\matrix->dotMatrix(Object(Np\matrix))
#4 /Users/sneakyimp/np/foo.php(8): Np\matrix->dot(Object(Np\matrix))
#5 {main}
thrown in /Users/sneakyimp/np/vendor/ghostjat/np/src/core/blas.php on line 28


I have the FFI extension loaded in PHP. The FFI documentation is sorely incomplete, so I'm not at all sure what this scope() call is supposed to do. The docs mention a #define statement, which you appear to have in blas.h:

#define FFI_SCOPE "blas"

EDIT: I'm running this script in PHP 8.1 on macOS. I have the FFI extension loaded, and brew shows openblas and lapack are installed.

I have also confirmed the error with PHP 8.1 on Ubuntu 20.04 LTS.
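A likely explanation, based on the PHP manual rather than anything this library documents: FFI::scope('blas') only resolves scopes that were registered during opcache preloading, so blas.h must be loaded with FFI::load() from an opcache.preload script before any request runs. A minimal sketch of that setup:

<?php
// preload.php: executed once at engine start via opcache.preload.
// Loading blas.h here registers its FFI_SCOPE ("blas"), which makes
// FFI::scope('blas') resolve at request time.
FFI::load(__DIR__ . '/vendor/ghostjat/np/src/core/blas.h');

with php.ini pointing at it:

opcache.enable=1
opcache.enable_cli=1        ; needed when running from the CLI
opcache.preload=/path/to/preload.php
ffi.enable=preload          ; or "true" to allow FFI outside preloading

Without preloading, assigning the loaded header directly, as the vector::sum report below does with Np\core\blas::$ffi_blas = FFI::load(...), sidesteps the scope lookup entirely.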

matrix::colAsVector function incorrectly using $this->row as multiplier

The colAsVector method clearly has a problem. Some simple code to illustrate:

require __DIR__ . '/vendor/autoload.php';
use Np\matrix;
$v = matrix::ar([
	[1,2,3,4,5,6],
	[7,8,9,10,11,12]
]);
echo $v, "\n";
$shape = $v->getShape();
for($i=0; $i<$shape->n; $i++) {
	$vect = $v->colAsVector($i);
	echo $vect, "\n";
}

The output is clearly wrong, and shows the second item in each column drifting off from the correct value.

Np\matrix
1.000000  2.000000  3.000000  4.000000  5.000000  6.000000  
7.000000  8.000000  9.000000  10.000000  11.000000  12.000000  

Np\vector
1.000000  3.000000  

Np\vector
2.000000  4.000000  

Np\vector
3.000000  5.000000  

Np\vector
4.000000  6.000000  

Np\vector
5.000000  7.000000  

Np\vector
6.000000  8.000000  

I believe this modified version of the function may remedy the problem:

    /**
     * Return a column of the matrix as a vector.
     * @param int $index
     * @return \Np\vector
     */
    public function colAsVector(int $index): vector {
        $vr = vector::factory($this->row);
        for ($i = 0; $i < $this->row; $i++) {
            // Data is stored row-major, so the stride between rows is the
            // column count: element (i, index) lives at data[i * col + index].
            $vr->data[$i] = $this->data[($i * $this->col) + $index];
        }
        return $vr;
    }
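A quick spot check against the matrix above, assuming the patched method is applied (expected output shown in the comments):

require __DIR__ . '/vendor/autoload.php';

use Np\matrix;

$v = matrix::ar([
    [1, 2, 3, 4, 5, 6],
    [7, 8, 9, 10, 11, 12]
]);
echo $v->colAsVector(0), "\n"; // 1.000000  7.000000
echo $v->colAsVector(5), "\n"; // 6.000000  12.000000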

composer complains about desired stability

The composer command you offer in the README file:

composer require ghostjat/np

Results in a complaint:

Could not find a version of package ghostjat/np matching your minimum-stability (stable). Require it with an explicit version constraint allowing its desired stability.

It may help to specify a slightly different composer require command:

composer require ghostjat/np:dev-main

where you can specify one of your branches (e.g., v0.0-alpha or np-0.0.1-alpha).
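Alternatively, Composer's stability settings can be relaxed in composer.json so that pre-release tags resolve. This is a standard Composer mechanism, not specific to Np, and the "*" constraint is only illustrative:

{
    "minimum-stability": "dev",
    "prefer-stable": true,
    "require": {
        "ghostjat/np": "*"
    }
}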

linAlg::dot() method has return type too narrowly declared

The linAlg::dot method needs its return type widened to include float (and possibly int or other scalar types). Multiplying two vectors yields a scalar result. E.g., this code in MATLAB/Octave returns a scalar, and BLAS returns a float:

octave:25> m1 = [1 2 3]
m1 =

   1   2   3

octave:26> m2 = [1;2;3]
m2 =

   1
   2
   3

octave:27> m1 * m2
ans = 14

This code should return 14:

$m1 = Np\vector::ar([1, 2, 3]);
echo "$m1\n";
$m2 = Np\vector::ar([1, 2, 3]);
echo "$m2\n";

$v = $m1->dot($m2);
echo "$v\n";

but due to the narrow return type declared on linAlg::dot, it throws this error:

PHP Fatal error:  Uncaught TypeError: Np\vector::dot(): Return value must be of type Np\matrix|Np\vector, float returned in /Users/sneakyimp/Desktop/biz/machine-learning/np/vendor/ghostjat/np/src/linAlgb/linAlg.php:34

Simply modifying the linAlg::dot function as follows should fix this particular error:

    /**
     * Get the dot product of m.m | m.v | v.v.
     *
     * @param \Np\matrix|\Np\vector $d
     * @return matrix|vector|float
     */
    public function dot(matrix|vector $d): matrix|vector|float {
        if ($this instanceof matrix) {
            if ($d instanceof matrix) {
                return $this->dotMatrix($d);
            }
            return $this->dotVector($d);
        }
        // vector-vector case: blas::dot returns a scalar float.
        return blas::dot($this, $d);
    }
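With the widened signature, the vector-times-vector snippet above should behave as in Octave (a sketch assuming the patch is applied):

use Np\vector;

$m1 = vector::ar([1, 2, 3]);
$m2 = vector::ar([1, 2, 3]);

$v = $m1->dot($m2); // vector-vector dot now returns a scalar
var_dump($v);       // expected: float(14)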

Np\vector::sum incorrectly returns sum of absolute values

It appears that the vector::sum function incorrectly calls blas::asum, which I believe returns the sum of absolute values instead of simply the sum of the values in a vector.

You can see the problem demonstrated with this short script:

<?php

require_once 'vendor/autoload.php';

// Work around the "Failed loading scope 'blas'" issue by loading blas.h directly.
Np\core\blas::$ffi_blas = FFI::load(__DIR__ . '/vendor/ghostjat/np/src/core/blas.h');


$v = Np\vector::ar([1, 2, 3]);
// this correctly returns 6
var_dump($v->sum());


$v = Np\vector::ar([-1, -2, -3]);
// this INCORRECTLY returns 6
var_dump($v->sum());

I don't know if there is a BLAS or LAPACK function optimized to return the correct value, but I suggest we modify the vector.php source code to change this:

    /**
     * The sum of the vector.
     * @return float
     */
    public function sum(): float {
        return blas::asum($this);
    }

to this:

    /**
     * The sum of the elements of the vector.
     * @return float
     */
    public function sum(): float {
        $sum = 0;
        for($i=0; $i<$this->ndim; $i++) {
                $sum += $this->data[$i];
        }
        return $sum;
    }

    /**
     * The sum of the absolute values of the elements of the vector.
     * @return float
     */
    public function sumAbs(): float {
        return blas::asum($this);
    }
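With that split, the two behaviors are explicit (assuming the patched vector.php above):

$v = Np\vector::ar([-1, -2, -3]);
var_dump($v->sum());    // expected: float(-6)
var_dump($v->sumAbs()); // expected: float(6)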
