Gonum

Installation

The core packages of the Gonum suite are written in pure Go with some assembly. Installation is done using go get.

go get -u gonum.org/v1/gonum/...

Supported Go versions

Gonum supports and tests using the gc compiler on the two most recent Go releases on Linux (386, amd64 and arm64), macOS and Windows (both on amd64).

Note that floating point behavior may differ between compiler versions and between architectures due to differences in floating point operation implementations.

Release schedule

The Gonum modules are released on a six-month release schedule, aligned with the Go releases: when Go 1.x is released, Gonum v0.n.0 is released around the same time; six months later, Go 1.x+1 is released along with Gonum v0.n+1.0.

The release schedule, based on the current Go release schedule, is thus:

  • Gonum-v0.n.0: February
  • Gonum-v0.n+1.0: August

Build tags

The Gonum packages use a variety of build tags to set non-standard build conditions. Building Gonum applications will work without knowing how to use these tags, but they can be used during testing and to control the use of assembly and CGO code.

The current list of non-internal tags is as follows:

  • safe — do not use assembly or unsafe
  • bounds — use bounds checks even in internal calls
  • noasm — do not use assembly implementations
  • tomita — use the Tomita, Tanaka, Takahashi pivot choice for maximal clique calculation, otherwise use a random pivot (only in the topo package)

Issues

If you find any bugs, feel free to file an issue on the GitHub issue tracker. Discussions on API changes, added features, code review, or similar requests are preferred on the gonum-dev Google Group.

https://groups.google.com/forum/#!forum/gonum-dev

License

Original code is licensed under the Gonum License found in the LICENSE file. Portions of the code are subject to the additional licenses found in THIRD_PARTY_LICENSES. All third party code is licensed either under a BSD or MIT license.

Code in graph/formats/dot is dual licensed Public Domain Dedication and Gonum License, and users are free to choose the license which suits their needs for this code.

The W3C test suites in graph/formats/rdf are distributed under both the W3C Test Suite License and the W3C 3-clause BSD License.

matrix's Issues

DenseMatrix Row() & Col()

  1. Why is there a RowView but no ColView?
a := mat64.NewDense(2, 3, []float64{
     1, 2, 3,
     2, 3, 4,
})

fmt.Print(a.Row(nil, 0)) works, but fmt.Print(a.Col(nil, 0)) panics:

panic: mat64: no blas engine registered: call Register()

goroutine 16 [running]:
runtime.panic(0xc21e0, 0x208196180)
        /usr/local/Cellar/go/1.3.1/libexec/src/pkg/runtime/panic.c:279 +0xf5
github.com/gonum/matrix/mat64.(*Dense).Col(0x2081ba180, 0x208196170, 0x2, 0x2, 0x0, 0x0, 0x0, 0x0)
        /Users/lazywei/src/github.com/gonum/matrix/mat64/dense.go:127 +0x156
main.main()
        /Users/lazywei/src/github.com/lazywei/adaboost/main.go:15 +0xbc

goroutine 17 [runnable]:
runtime.MHeap_Scavenger()
        /usr/local/Cellar/go/1.3.1/libexec/src/pkg/runtime/mheap.c:507
runtime.goexit()
        /usr/local/Cellar/go/1.3.1/libexec/src/pkg/runtime/proc.c:1445

goroutine 18 [runnable]:
bgsweep()
        /usr/local/Cellar/go/1.3.1/libexec/src/pkg/runtime/mgc0.c:1976
runtime.goexit()
        /usr/local/Cellar/go/1.3.1/libexec/src/pkg/runtime/proc.c:1445

goroutine 19 [runnable]:
runfinq()
        /usr/local/Cellar/go/1.3.1/libexec/src/pkg/runtime/mgc0.c:2606
runtime.goexit()
        /usr/local/Cellar/go/1.3.1/libexec/src/pkg/runtime/proc.c:1445
exit status 2

I thought Row and Col should behave similarly. Why does one work while the other fails?

loss of floating point precision

I have written two variants of a function to compute the matrix exponential: one using the gonum/matrix package and another using hrautila/matrix with hrautila/linalg for the BLAS operations. The gonum implementation is given below, followed by the hrautila/linalg implementation:

    func ExpM(M *gnm.Dense) *gnm.Dense {
            nrows, _ := M.Dims()
            const (
                    acc = 10.0

                    // Scaling
                    N = 3.0
            )

            var (
                    msmall = new(gnm.Dense)
                    temp   = new(gnm.Dense)
                    res    = gnm.Eye(nrows)

                    emchan  = make(chan *gnm.Dense, 1)
                    sqmchan = make(chan *gnm.Dense, 1)
            )

            // M*(1/(2^3))
            // N controls accuracy of exponentiation
            msmall.ScaleDense(1.0/math.Pow(2, N), M)

            // Exponentiation Part
            fact_i := 0.0
            exp := func(i float64) {
                    fact_i = FactorialMemoized(i)
                    temp.Scale(1.0/fact_i, msmall)
                    res.Add(temp, res)
                    msmall.Mul(msmall, msmall)
                    emchan <- res
            }

            // Squaring Part
            sqm := func(r *gnm.Dense) {
                    for i := 0; i < N; i++ {
                            r.Mul(r, r)
                    }
                    sqmchan <- r
            }

            for i := 0.0; i < acc; i++ {
                    go exp(i)
            }

            select {
            case r := <-emchan:
                    {
                            go sqm(r)
                            select {
                            case retm := <-sqmchan:
                                    {
                                            return retm
                                    }
                            }
                    }
            }
            return nil
    }

    // hrautila/linalg implementation
    func MExp(M *gnm.Dense) *gnm.Dense {
            nrows, ncols := M.Dims()
            nelem := len(M.Data())

            mchan := make(chan *mx.FloatMatrix, 1)

            go func(M *gnm.Dense) {
                    mchan <- mx.FloatNew(nrows, ncols, M.Data())
            }(M)

            MHere := <-mchan

            const (
                    // Accuracy
                    acc = 10.0
                    // Scaling
                    N = 3.0
            )

            var (
                    MPower  = mx.FloatZeros(nrows, ncols)
                    MPower1 = mx.FloatZeros(nrows, ncols)
                    MExp1   = mx.FloatZeros(nrows, ncols)
                    MExp2   = mx.FloatZeros(nrows, ncols)
            )

            // again N control accuracy of exponentiation
            MSmall := MHere.Scale(1 / math.Pow(2.0, N))

            // Exponentiation Part
            R := mx.FloatIdentity(nrows)

            if err := blas.CopyFloat(MSmall, MPower); err != nil {
                    panic(err)
            }

            tmpM1 := make([]float64, nelem)
            MPowerArr := MPower.FloatArray()

            factorial_i := 1.0
            for i := 1.0; i < acc; i++ {
                    factorial_i = FactorialMemoized(i)
                    // factorial_i = factorial_i * i

                    for i := 0; i < nelem; i++ {
                            tmpM1[i] = MPowerArr[i] / factorial_i
                    }

                    TempMat := mx.FloatNew(nrows, ncols, tmpM1)
                    R = R.Plus(TempMat)

                    if err := blas.CopyFloat(MPower, MPower1); err != nil {
                            panic(err)
                    }

                    if err := blas.GemmFloat(MPower1, MSmall, MPower, 1.0, 0.0, lg.OptNoTrans, lg.OptNoTrans); err != nil {
                            panic(err)
                    }
            }

            // Squaring Step
            for i := 0.0; i < N; i++ {
                    if err := blas.CopyFloat(R, MExp1); err != nil {
                            panic(err)
                    }
                    if err := blas.CopyFloat(R, MExp2); err != nil {
                            panic(err)
                    }
                    if err := blas.GemmFloat(MExp1, MExp2, R, 1.0, 0.0, lg.OptNoTrans, lg.OptNoTrans); err != nil {
                            panic(err)
                    }
            }

            // rm, _ := gnm.NewDense([][]float64{R.GetRowArray(i, vals)})
            return gnm.NewDense(R.Rows(), R.Cols(), R.FloatArray())
    }

Both of them do the same thing, except that the hrautila/linalg implementation is almost a carbon copy of a similar C++ implementation. The input matrix is:

    &{{3 3 3 [0 -0.7853981633974483 0 0.7853981633974483 0 -0 -0 0 0]}}

The results are as follows; the first is from the gonum/matrix implementation, followed by the hrautila/linalg implementation:

    225.399us
    &{{3 3 3 [0.6773165280053692 -0.6841569732971127 0 0.6841569732971127 0.6773165280053692 0 0 0 1]}}
    271.354us
    &{{3 3 3 [0.7071067811865479 -0.7071067811865477 0 0.7071067811865477 0.7071067811865479 0 0 0 1]}}

The hrautila implementation gives the correct answer, while the gonum one has precision issues. However, with N = 12.0 in the gonum implementation, the results are as follows:

    120.982us
    &{{3 3 3 [0.7070535250738923 -0.7070535522971928 0 0.7070535522971928 0.7070535250738923 0 0 0 1]}}
    137.803us
    &{{3 3 3 [0.7071067811865479 -0.7071067811865477 0 0.7071067811865477 0.7071067811865479 0 0 0 1]}}

mat64: SVD cannot operate on wide matrices

The original JAMA code that SVD was ported from does not handle wide matrices, so we have the same problem. The failure is not strictly based on matrix dimension; there is a potential failing case in the tests marked FIXME(kortschak).

mat64: Feature request for variadic matrix multiplication

This is a feature request for

func (m *Dense) MulBatch(a ...Matrix)

The computation time for matrix multiplication can be significantly faster depending on the ordering. For example, multiplying

(n, 1) x (1, n) x (n, n)

It's much faster to do the (1,n) x (n,n) product first and then multiply the resulting two vectors than to do the vector multiplication first and then multiply the two (n,n) matrices. This problem can be solved through dynamic programming. In gonum, we can do even better than this because of the Matrix interface. We can use reflection to see the underlying Matrix type (SymDense/BandPacked/etc.) and use the operation counts for that type's multiplication. This allows automatic, efficient multiplication ordering of arbitrary types (pretty neat).

We probably want the function signature to pair a matrix type with a "transpose", as complicated matrix multiply expressions almost always have them.
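The ordering question above is the classic matrix-chain multiplication problem. As a rough, self-contained illustration of the dynamic-programming approach (not part of any proposed gonum API; the function name is illustrative), minMulCost computes the cheapest scalar-multiplication count for a chain whose dimensions are given as a list:

```go
package main

import "fmt"

// minMulCost returns the minimum number of scalar multiplications
// needed to compute a chain of matrices, where matrix i has size
// dims[i] x dims[i+1].
func minMulCost(dims []int) int {
	n := len(dims) - 1 // number of matrices in the chain
	// cost[i][j] is the cheapest cost of multiplying matrices i..j.
	cost := make([][]int, n)
	for i := range cost {
		cost[i] = make([]int, n)
	}
	for length := 2; length <= n; length++ {
		for i := 0; i+length-1 < n; i++ {
			j := i + length - 1
			cost[i][j] = int(^uint(0) >> 1) // max int
			// Try every split point k between i and j.
			for k := i; k < j; k++ {
				c := cost[i][k] + cost[k+1][j] + dims[i]*dims[k+1]*dims[j+1]
				if c < cost[i][j] {
					cost[i][j] = c
				}
			}
		}
	}
	return cost[0][n-1]
}

func main() {
	// (n,1) x (1,n) x (n,n) with n = 100: left-to-right costs
	// 100*1*100 + 100*100*100 = 1,010,000 multiplications, while
	// doing (1,n) x (n,n) first costs only 10,000 + 10,000.
	fmt.Println(minMulCost([]int{100, 1, 100, 100})) // → 20000
}
```

A real MulBatch would additionally use the per-type operation counts mentioned above rather than the dense r*k*c cost used in this sketch.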

use does not correctly return zeroed data slices

The use helper tries to return an adequately sized data slice if one is available from the given []float64. It does not zero those elements, on the basis that they will be overwritten before being read. This assumption is not valid for some calls to BLAS routines; e.g. Dgemm calls will add to the values in the recipient. This was identified in #63, but probably impacts other places (e.g. Mul and the new MulTrans).

Probably the best approach is to zero prior to these BLAS calls.

Note that it's probably worthwhile figuring out why the cblas implementation does not fail due to this.
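A minimal sketch of the zeroing fix (the helper name is illustrative, not the package's internal API): the recycled slice is cleared before any accumulating BLAS call sees it.

```go
package main

import "fmt"

// zero clears a data slice in place. The idea is to call this on
// memory recycled by use before handing it to a BLAS routine, such
// as Dgemm, that accumulates into its output.
func zero(f []float64) {
	for i := range f {
		f[i] = 0
	}
}

func main() {
	f := []float64{1, 2, 3}
	zero(f)
	fmt.Println(f) // → [0 0 0]
}
```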

Add discussion of using matrices and views

It is probably worth adding a discussion on how to be a good citizen when it comes to manipulating matrix data directly. As far as we're concerned, getting the underlying data directly is definitely discouraged, but we allow it for a number of good reasons. Developers of (especially low-level) packages that take a Dense are likely to do so as well. We should explain the correct way to do it, for example by showing that the correct way to add all of the elements of a matrix is to loop over the rows, not to call floats.Sum on the data slice itself.

Build status badge is misleading

The build status badge shows red when a PR fails, even when that PR has not been merged and the master is actually passing. I don't think this is the correct behaviour.

mat64: add Transpose type

In #121 I proposed a new type Transpose.

I suggest we add a Transpose struct { Matrix } with Dims and At methods that transpose the embedded matrix. In itself, this is not wildly useful, but it can be used internally to signal the transpose state to the Mul routine. This allows us to retire the MulTrans API while keeping its optimisations (we could keep it, but this is worth thinking about).

Identifying the transpose state would be done by a helper that walks the T chain until the underlying Dense (or other optimisable type) is found, keeping track of the state. Additionally, I think a helper T(Matrix) Matrix would be worthwhile to optimise this process:

func T(m Matrix) Matrix {
    if t, ok := m.(Transpose); ok {
        return t.Matrix
    }
    return Transpose{m}
}

ElemDiver and Diager Interfaces

It would be nice to have an element divide function and diagonal function for matrix. I'm thinking of using them for a CorrelationMatrix function in the stat package, which could then just use

func CorrelationMatrix(m mat64.Matrix, ... ) mat64.Dense {
   c := CovarianceMatrix(m, ...)
   d := c.Diag()
   // take a square root of d here

   // turn d back into a matrix here?
   dt := d.TCopy()

   w := new(mat64.Dense)
   w.Mul(d, dt)
   c.ElemDiv(c, w)
   return c
}

Chaining for Arithmetic like multiplication, addition of matrices?

Hey guys,
I wanted to know if it's possible to make chaining available for multiplication and addition of matrices.
It would be very helpful in use cases such as:

    A6s = A6*(c[13]*A6 + c[11]*A4 + c[9]*A2)
    where each c[i] is a constant and A6, A4, A2 are matrices

    U = A * (A6s + c[7]*A6 + c[5]*A4 + c[3]*A2 + c[1]*I)

In such situations, chaining makes the code much easier to write.

mat64: capRows and capCols are not correctly handled in arithmetic reuse cases

When m.isZero() == true, a call to use is made. This call may result in the allocation of a new data slice, invalidating the previous cap values. We don't handle this at all. The simplest case of this is when a Dense{} is used; since the data slice is nil, a new data slice is allocated, but (AFAICS) in no case is capRows or capCols set to reflect that this matrix now has a capacity.

The best approach I think is to factor the m.isZero == true consequences out into a separate method, func (m *Dense) make(r, c int, zero bool).

This arose in discussion in #114.

Dense Matrix multiplication

Hi,

When I tried to multiply one matrix to another one,

    camMat.Mul(camMat, stackedMat)

(both are Dense matrices), it complains:

cannot use stackedMat (type mat64.Dense) as type mat64.Matrix in argument to camMat.Mul:
    mat64.Dense does not implement mat64.Matrix (At method has pointer receiver)

I'm not sure why ... what is the correct way to do matrix multiplication then?

mat64: use() and *Dense.View() don't play well together

Consider the following code:

package main

import (
    "fmt"

    "github.com/gonum/matrix/mat64"
)

type fm struct {
    mat64.Matrix
}

func (m fm) Format(fs fmt.State, c rune) {
    if c == 'v' && fs.Flag('#') {
        fmt.Fprintf(fs, "%#v", m.Matrix)
        return
    }
    mat64.Format(m.Matrix, 0, '.', fs, c)
}

func main() {
    a := mat64.NewDense(4, 4, []float64{
        1, 2, 3, 4,
        5, 6, 7, 8,
        9, 10, 11, 12,
        13, 14, 15, 15,
    })
    v := a.View(1, 1, 1, 1).(*mat64.Dense)
    v.Reset()

    b := mat64.NewDense(3, 3, []float64{
        1, 2, 3,
        4, 5, 6,
        7, 8, 9,
    })

    fmt.Printf("a' before =\n%v\n", fm{a})

    v.Add(b, b)

    fmt.Printf("v' =\n%v\n", fm{v})

    fmt.Printf("a' after =\n%v\n", fm{a})
}
$ go run useView.go 
a' before =
⎡ 1   2   3   4⎤
⎢ 5   6   7   8⎥
⎢ 9  10  11  12⎥
⎣13  14  15  15⎦
v' =
⎡ 2   4   6⎤
⎢ 8  10  12⎥
⎣14  16  18⎦
a' after =
⎡ 1   2   3   4⎤
⎢ 5   2   4   6⎥
⎢ 8  10  12  14⎥
⎣16  18  15  15⎦

We would expect that elements of a outside the view v would be unaffected by the add operation. However, the combination of View, Reset and use (internally) results in illegitimate reuse of those elements.

There are five obvious solutions, though the problem is subtle.

  1. Document that they don't interact well and deal with queries on gonum-dev and issues here.
  2. Replace use with make and lose slice reuse.
  3. Make Reset re-make the slice (essentially the same as 2.).
  4. Increase the logic complexity of use as described below.
  5. Increase the complexity of the *Dense struct to include 2 dimensions of cap as described in the tables proposal by @btracey. (Left as an exercise for the reader).
func use(m blas64.General, i, j, r, c, s int) blas64.General {
    var data []float64
    if s != m.Stride || r > m.Rows || c > m.Cols {
        data = make([]float64, r*c)
    } else {
        data = m.Data[i*s+j : (i+r-1)*s+(j+c)]
    }
    return blas64.General{
        Rows:   r,
        Cols:   c,
        Stride: s,
        Data:   data,
    }
}

This depends on the fact that Reset does not alter the Stride field and means that we cannot recover quite as many data slices as previously (option 5 would be better with that).

TestSetRowColumn failure

On my machine (OS X 10.9) the TestSetRowColumn tests fail. The errors are off by a small floating point error (probably 1 ULP). The check.Equals should be changed to some form of EqualsApprox.

Consider changing Viewer interface and having Dense.View return a Matrix

Right now the signature is
type Viewer interface {
    View(a Matrix, i, j, r, c int)
}

In general, this is impossible to satisfy. There is no way a matrix can become a view of an arbitrary matrix. For example, the current Dense implementation will panic if a is anything but a *Dense. Instead, I think it should be

// Viewer returns a view of itself
type Viewer interface {
    View(i, j, rows, cols int) Matrix
}
Since a given matrix knows how it works, it can return a view of itself.

This has the added bonus of being useful for chaining, for example
b.Copy(a.View(0,0, 10, 15))
which at the moment is difficult/tedious to accomplish if you don't want to change a.

The number of interfaces causes many edge cases

This is following up on discussion in gonum/stat#19

As currently defined, the interfaces in mat64 make it so that you might get a Matrix which doesn't implement various matrix operations (such as Col, Row, Add, Mul, ...), which makes implementing functions that accept a Matrix full of edge cases. A caller has to see if the input Matrix implements arithmetic (for example) and then perhaps have a fallback, and another fallback, and so on. This makes it difficult to implement and test.

In addition, many of the methods panic when there is no blas engine. We should probably register goblas by default when it is ready, which will reduce the dimensions of the possible interfaces.

One way to do this is to make methods like Row, Col, Add, Mul, etc. have sigs like

func Col(x Matrix, i int) []float64 { ... }

This function would then choose from the interfaces x implements to produce a result. For example, if x implements Vectorer, it could just call x.Col(i); otherwise, if it implements RawMatrixer, it could call RawMatrix() and pull the values out at the appropriate indexes. If neither is available, it could fall back to At.

Alternatively, all of the interfaces could be folded into Matrix, so that every Matrix would have to implement Col, Row, and so on. That would be easier if goblas were always available.

go test produces error on OSX 10.9 -framework Accelerate: Idamax NaN & NaNInc

$ go test
--- FAIL: TestIdamax (0.00 seconds)
level1double.go:1155: idamax: mismatch NaN: expected 0, found 1
level1double.go:1155: idamax: mismatch NaNInc: expected 0, found 1
FAIL
exit status 1
FAIL github.com/gonum/blas/cgo 0.063s

Environment:
Mac OSX10.9 CGO_LDFLAGS="-framework Accelerate"

Behaviour:
commenting out the test (or altering the expected value to 1) produces NO errors

Comment:
I see nothing going wrong in the implementation: it might be an error in Apple's framework. Maybe it should be announced to the user at test time, and maybe it should be handled by an if clause on darwin?

Removal of func realloc

In C/C++, realloc() resizes an allocation. When it grows the allocation, new memory is allocated and the existing data is copied into it.

In func realloc() in matrix.go there is no copying of data. This makes it ambiguous, because the user would expect the same behaviour as in C/C++. It is also only used twice, and in both cases a plain make() would do the trick. So IMO func realloc() could be removed (or at least updated so that the data is copied).
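A version of realloc matching the C semantics would copy on growth; a hypothetical sketch (the name grow is illustrative, this is not the package's actual code):

```go
package main

import "fmt"

// grow returns a slice of length n backed by f when capacity allows;
// otherwise it allocates new memory and, like C's realloc, copies the
// old contents into it.
func grow(f []float64, n int) []float64 {
	if cap(f) >= n {
		return f[:n]
	}
	g := make([]float64, n)
	copy(g, f)
	return g
}

func main() {
	f := []float64{1, 2, 3}
	f = grow(f, 5)
	fmt.Println(f) // → [1 2 3 0 0]
}
```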

Matrix Exponential function - benchmark

System Specs:

    CPU: Intel Core 2 Duo E8400 @ 3.00 GHz (2 CPUs)
    RAM: 4096MB
    OS: Xubuntu 14.04

Input matrix is:

    100x100 Identity matrix scaled by pi/4

The benchmark runs the concurrent implementation first, followed by the serial implementation. For relative performance, the hrautila/linalg serial implementation was added as well.
The exponential of the input matrix is computed 100 times by each implementation.
The whole bench is run 10 times.
These are the results obtained:

    ExpMC (Concurrent): 216.222009ms
    ExpMS (Serial): 173.153952ms
    MExp (Hrautila): 588.374985ms 

    ExpMC (Concurrent): 180.777765ms
    ExpMS (Serial): 179.983113ms
    MExp (Hrautila): 590.064161ms 

    ExpMC (Concurrent): 180.441359ms
    ExpMS (Serial): 184.183137ms
    MExp (Hrautila): 593.176985ms 

    ExpMC (Concurrent): 214.912445ms
    ExpMS (Serial): 175.590532ms
    MExp (Hrautila): 598.483613ms 

    ExpMC (Concurrent): 181.679134ms
    ExpMS (Serial): 190.575331ms
    MExp (Hrautila): 623.571551ms 

    ExpMC (Concurrent): 174.305615ms
    ExpMS (Serial): 196.449821ms
    MExp (Hrautila): 626.91411ms 

    ExpMC (Concurrent): 172.044364ms
    ExpMS (Serial): 177.37516ms
    MExp (Hrautila): 639.042921ms 

    ExpMC (Concurrent): 178.190162ms
    ExpMS (Serial): 176.627894ms
    MExp (Hrautila): 654.692701ms 

    ExpMC (Concurrent): 178.382144ms
    ExpMS (Serial): 183.512952ms
    MExp (Hrautila): 618.775386ms 

    ExpMC (Concurrent): 186.254995ms
    ExpMS (Serial): 186.004201ms
    MExp (Hrautila): 679.765778ms

mat64: Exp does not respect views

Exp calls m.Mul(m, m) which will currently result in reallocation of the backing data. The multiply should be done in a workspace and copied to m at return.

Consider making solve return an error on singular matrix

It's generally hard to know if a matrix is singular before using it in a call to solve (because in many cases, the matrix is really only 'almost singular', but floating point precision gets in the way). As a result, it feels like Solve should return an error rather than panic on a singular matrix.
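Until such a signature change lands, a caller can recover the panic into an error themselves; a sketch of that pattern (solveSafe and the panic message are illustrative, not mat64 API):

```go
package main

import (
	"errors"
	"fmt"
)

// solveSafe runs solve and converts any panic it raises, such as a
// singular-matrix panic, into an error for the caller to inspect.
func solveSafe(solve func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = errors.New(fmt.Sprint(r))
		}
	}()
	solve()
	return nil
}

func main() {
	// A stand-in for a solver that panics on a singular matrix.
	err := solveSafe(func() { panic("mat64: matrix singular") })
	fmt.Println(err) // → mat64: matrix singular
}
```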

No precise license is given

Hi,

Could you please state the precise license under which this project is licensed? Right now it says only "BSD-style", but it's not quite clear exactly which one is used.

Thank you

mat64: consider retiring less used interface definitions

The interface definitions in matrix.go served a useful purpose in defining the direction of development for mat64. We now have a fairly well defined set of interfaces based on the concrete types, so, with the exception of interfaces that are used within the package or are likely to be useful to clients, we should start considering retiring them.

Please add your thoughts on which should be retired and which should stay.

Problems when a Dense/Vector is a receiver and an argument

See discussion in #92

At the moment, this is a problem throughout Dense and in Vector.

When the receiver is the same as an argument, new memory is allocated to avoid aliasing. This new data is used in place of the old matrix. There are two problems with the current approach:

  1. (Relatively easy): In the code, there is no check that the receiver is the correct size. For example, with a.Mul(a, b), a is successfully overwritten even if b is not an nxn matrix. This is contrary to the current policy that sizes are only fungible when the matrix is zero.
  2. (Harder): This behavior also does not interact well with views. Instead of the newly allocated memory replacing the old data, in cases where a is a view into a different matrix, the new data should be copied into the old locations.

Cholesky should input Matrix

Cholesky Factor should take a Matrix rather than a Dense. Significant improvement is possible when the matrix is symmetric, for example.

matrix: propose add cmat128 package

At this stage it would be a stripped-down complex128 version of the mat64 interface definitions (probably just the basic arithmetic operations, maybe with their implementations via cgo in the first instance) and a Dense representation, with a view to adding those operations in the future and, significantly, to giving us an available representation for results from the LAPACK routines that generate complex results on some inputs.

Should we use goblas instead of cblas for tests?

Currently, running tests on the matrix/mat64 package requires an installed cblas library. Now that @btracey has completed a pure Go BLAS implementation in blas/goblas, we may want to change the tests to use goblas, so that users will be able to run go test without installing cblas.

This would be particularly useful for travis-ci, which currently takes ~10m to run, almost all of which is spent making OpenBLAS, which could be reduced to something like a minute. There is also an intermittent error that occurs during travis builds, which happens when it fails to download the netlib cblas header.

All of the blas libraries should produce the same output (ha, ha), and if they don't then the appropriate place to test is probably in the blas package instead of here. Benchmarks may still be interesting though.

Surprising replacement of matrix data

In the following code:

base := make([]float64, n*n)
for i := range base {
    base[i] = rand.Float64()
}
b := NewDense(n, n, base)
b.MulTrans(b, true, b, false)

The changes to b are not reflected in base. This is surprising. New data needs to be allocated in the interim to avoid races, but the final result should probably be copied into the data rather than being replaced with a new slice.

Cholesky take in Symmetric

Thinking more on it, I think Cholesky should take in a Symmetric instead of a SymDense. We can do the CopySym ourselves into a SymDense for now, and use some type switching for symmetric banded and diagonal when we have them. I can make the PR if acceptable.

CholeskyFactor answer should be Triangular

The point of Cholesky is that it decomposes the matrix into an upper and lower triangular matrix. Thus, the L field should be Triangular rather than Dense. This is especially important as the next step is often to do a Solve, which is much more efficient using a triangular solve rather than a general linear solve.

Transpose Multiply Operation

There are many cases where we want to multiply a transpose of a matrix by another. For example, in the stat.CovarianceMatrix function, the covariance matrix is calculated by first centering the input, making a transpose copy of it, and then matrix multiplying the original by the copy. It would be more memory and time efficient if we could avoid the transpose copy. I think we should have an additional method of Dense which would be able to express that kind of transpose multiply operation.

BLAS usually allows the caller to indicate if an input matrix is transpose via an additional input argument - see for example DGEMM and DSYRK. In R, the same effect is produced by using tcrossprod (http://stat.ethz.ch/R-manual/R-patched/library/Matrix/html/tcrossprod.html). I'm not sure what MATLAB does behind the scenes, but I strongly suspect that the jitter recognizes these operations, or maybe they encode transpose via a flag in the matrix objects themselves. I believe we've already discussed that approach and came out against it.

That leaves two ways to do this - we can either have a more general multiply operation, which would have additional inputs to indicate if the first and/or second matrix was a transpose, following the function signature of BLAS. Alternatively, we could have an additional, specific TMul function, which would only perform A * B'. (Or, perhaps a TMul which indicates A' * B, MulT which indicates A * B', and TMulT which indicates A' * B').

An additional possibility is that in the cases where multiply operations are taking place between two matrices and one of them is nil, we could take that to mean that the nil argument is the same as the other argument. For example, a "MulT" from above, with m.MulT(A, nil) would mean that A * A' should be stored into m. In this way we could (at some point) take advantage of the symmetry of the operation.

What are your thoughts? I think I prefer more simple methods rather than fewer complicated methods, but the main thing is to be able to express these relationships.
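For illustration only, the semantics of a MulT-style A * B' without forming the transpose copy can be sketched with a naive row-major multiply (the function name and flat-slice layout are assumptions for the sketch, not the proposed mat64 API):

```go
package main

import "fmt"

// mulT computes a * bᵀ for row-major dense matrices without forming
// the transpose copy. a is r×k; b is c×k, so bᵀ is k×c and the
// result is r×c.
func mulT(a []float64, r, k int, b []float64, c int) []float64 {
	out := make([]float64, r*c)
	for i := 0; i < r; i++ {
		for j := 0; j < c; j++ {
			var s float64
			for t := 0; t < k; t++ {
				// b[j*k+t] is element (t, j) of bᵀ, read from b
				// directly instead of from a transposed copy.
				s += a[i*k+t] * b[j*k+t]
			}
			out[i*c+j] = s
		}
	}
	return out
}

func main() {
	// a = [1 2; 3 4], b = [5 6; 7 8]; a * bᵀ = [17 23; 39 53].
	fmt.Println(mulT([]float64{1, 2, 3, 4}, 2, 2, []float64{5, 6, 7, 8}, 2))
}
```

In practice this dispatch would of course go to BLAS via the transpose flag of DGEMM rather than a triple loop; the sketch only shows what the operation means.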

cmat128 support missing ?

Hello,

I found several references to a complex128 matrix package, but in the tree I can only find mat64.
Has that support been dropped?

thanks,
Fausto

what do you think about these functions?

    // https://github.com/skelterjohn/go.matrix
    // Create a Zero matrix
    func Zeros(r, c int) *Dense {
            A := new(Dense)
            A.mat = RawMatrix{
                    Data:   make([]float64, r*c),
                    Rows:   r,
                    Cols:   c,
                    Stride: c,
            }
            return A
    }

    // https://github.com/skelterjohn/go.matrix
    // Create an identity matrix
    func Eye(span int) *Dense {
            A := Zeros(span, span)
            for i := 0; i < span; i++ {
                    A.Set(i, i, 1)
            }
            return A
    }

    // Can use the View function provided, but then it needs an extra allocation for the receiver
    // in some cases, and it also doesn't allow chaining.
    // Probably should go into the Matrix interface
    // https://github.com/skelterjohn/go.matrix
    // Get Sub Matrix of A of size r,c starting at i,j
    func (A *Dense) GiveSubMatrix(i, j, r, c int) *Dense {
            B := new(Dense)
            str := A.mat.Stride
            B.mat = RawMatrix{
                    Data:   A.mat.Data[i*str+j : (i+(r-1))*str+(j+c)],
                    Rows:   r,
                    Cols:   c,
                    Stride: c,
            }
            return B
    }

    // https://github.com/skelterjohn/go.matrix
    // Can be extracted by extracting a RawMatrix and then taking the element array,
    // but that gets repetitive very quickly.
    func (A *Dense) Data() []float64 {
            if A.mat.Stride == A.mat.Cols {
                    return A.mat.Data[0 : A.mat.Rows*A.mat.Cols]
            }
            a := make([]float64, A.mat.Rows*A.mat.Cols)
            for i := 0; i < A.mat.Rows; i++ {
                    for j := 0; j < A.mat.Cols; j++ {
                            a[i*A.mat.Cols+j] = A.mat.Data[i*A.mat.Stride+j]
                    }
            }
            return a
    }
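For a sense of how these proposed helpers would behave, here is a hedged, self-contained sketch using a hypothetical miniDense stand-in (the real mat64.Dense and RawMatrix carry more fields and methods; the names here are assumptions for illustration only):

```go
package main

import "fmt"

// miniDense is a hypothetical stand-in for mat64.Dense, used only to
// illustrate the proposed Zeros, Eye, and Data helpers.
type miniDense struct {
	rows, cols, stride int
	data               []float64
}

// Zeros returns an r×c matrix of zeros, as proposed above.
func Zeros(r, c int) *miniDense {
	return &miniDense{rows: r, cols: c, stride: c, data: make([]float64, r*c)}
}

// Eye returns a span×span identity matrix built on top of Zeros.
func Eye(span int) *miniDense {
	a := Zeros(span, span)
	for i := 0; i < span; i++ {
		a.data[i*a.stride+i] = 1
	}
	return a
}

// Data returns a compact copy of the element array, honouring the stride
// when the rows are not stored contiguously.
func (a *miniDense) Data() []float64 {
	if a.stride == a.cols {
		return a.data[:a.rows*a.cols]
	}
	out := make([]float64, a.rows*a.cols)
	for i := 0; i < a.rows; i++ {
		copy(out[i*a.cols:(i+1)*a.cols], a.data[i*a.stride:i*a.stride+a.cols])
	}
	return out
}

func main() {
	fmt.Println(Eye(2).Data()) // [1 0 0 1]
}
```

One design point worth noting: a sub-matrix view must keep the parent's stride, which is exactly why Data() needs the strided copy path.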

Panic when calling 1-norm

    vectorX := mat64.NewDense(3, 1, []float64{2, 2, 3})
    vectorY := mat64.NewDense(3, 1, []float64{1, 4, 5})
    subVector := mat64.NewDense(0, 0, nil)
    subVector.Sub(vectorX, vectorY)

    result := subVector.Norm(1)

gives

    goroutine 4 [running]:
    runtime.panic(0x1465e0, 0x21043dba0)
        /usr/local/Cellar/go/1.2.1/libexec/src/pkg/runtime/panic.c:248 +0x106
    github.com/gonum/matrix/mat64.(*Dense).Col(0x210417e70, 0x210441980, 0x3, 0x3, 0x0, ...)
        /Users/lazywei/GoProjects/golearn/src/github.com/gonum/matrix/mat64/dense.go:127 +0x12d
    github.com/gonum/matrix/mat64.(*Dense).Norm(0x210417e70, 0x3ff0000000000000, 0x210417db0)
        /Users/lazywei/GoProjects/golearn/src/github.com/gonum/matrix/mat64/dense_arithmetic.go:58 +0x108
    github.com/sjwhitworth/golearn/metrics/pairwise.(*Manhattan).Distance(0x33a8e8, 0x210417db0, 0x210417de0, 0x210402a00)
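As a stopgap until the panic is fixed, the 1-norm (the maximum absolute column sum) can be computed directly. This is a hedged sketch over a plain row-major slice, not the mat64 API; the `oneNorm` name and flat layout are assumptions for illustration:

```go
package main

import (
	"fmt"
	"math"
)

// oneNorm computes the matrix 1-norm (maximum absolute column sum)
// of an r×c matrix stored row-major in data.
func oneNorm(data []float64, r, c int) float64 {
	max := 0.0
	for j := 0; j < c; j++ {
		sum := 0.0
		for i := 0; i < r; i++ {
			sum += math.Abs(data[i*c+j])
		}
		if sum > max {
			max = sum
		}
	}
	return max
}

func main() {
	// subVector = vectorX - vectorY = [1, -2, -2] as a 3×1 matrix,
	// so its 1-norm is |1| + |-2| + |-2| = 5.
	fmt.Println(oneNorm([]float64{1, -2, -2}, 3, 1)) // 5
}
```

For a 3×1 matrix this coincides with the vector 1-norm, which is what the golearn Manhattan distance above is after.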

Change signature of Vec.Mul

It's not true that a Vector can result from multiplying two general matrices: a Vector can only result from multiplying a matrix with a Vector. I vote we reflect this in the signature and instead make it

    func (v *Vector) Mul(a Matrix, trans bool, b *Vector)

This signature maps very nicely onto the BLAS *mv calls (dgemv and friends).
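To make the intended semantics concrete, here is a hedged sketch with a plain-slice stand-in (the `mulVec` name and the flat row-major layout are assumptions, not the mat64 API): it computes v = a·b or v = aᵀ·b depending on trans, mirroring the shape of a dgemv call.

```go
package main

import "fmt"

// mulVec computes y = A*x (trans=false) or y = Aᵀ*x (trans=true),
// where A is an r×c matrix stored row-major in a. This mirrors the
// trans flag of a BLAS dgemv call, as the proposal above suggests.
func mulVec(a []float64, r, c int, trans bool, x []float64) []float64 {
	if trans {
		y := make([]float64, c)
		for i := 0; i < r; i++ {
			for j := 0; j < c; j++ {
				y[j] += a[i*c+j] * x[i]
			}
		}
		return y
	}
	y := make([]float64, r)
	for i := 0; i < r; i++ {
		for j := 0; j < c; j++ {
			y[i] += a[i*c+j] * x[j]
		}
	}
	return y
}

func main() {
	a := []float64{1, 2, 3, 4} // the 2×2 matrix [[1 2] [3 4]]
	fmt.Println(mulVec(a, 2, 2, false, []float64{1, 1})) // [3 7]
	fmt.Println(mulVec(a, 2, 2, true, []float64{1, 1}))  // [4 6]
}
```

Note that the result length is r without transposition and c with it, which is exactly why a general matrix-matrix signature cannot guarantee a Vector result.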

special case - multiplication results in zero matrix every time

Here's what I did:

  1. Create a new directory in my $HOME
  2. Make it the $GOPATH
  3. Create a new directory inside it and paste two files expm.go, expm_test.go
  4. go get gonum/matrix and gonum/blas
  5. install gonum/blas by editing the "cblas/genBlas.pl" file as follows:
    my $cblasHeader = "cblas.h";
    my $LIB = "/usr/lib/"; # add this line
    # ... some lines regarding "my %done" ...
    my $atlas = "";
    if ($excludeAtlas) {
        $done{'cblas_csrot'} = 1;
        $done{'cblas_zdrot'} = 1;
    } else {
           $atlas = " -latlas";
    }
    # ... some comments ...

Then add the following on the line just above the #include "${cblasHeader}" line:

    #cgo linux LDFLAGS: -L/usr/lib/ -lcblas
  6. This will install the blas package and create the binary; if this is not done, the install fails with "ld returned 1".

  7. I uninstalled and re-installed libatlas, libblas and liblapack as well, to no success.

  8. Anyway, once this is done, I created a directory called "main" in "$GOPATH/" and created a file called "main.go".

  9. The code is as follows:

    package main
    
    import (
            "expm"
            "fmt"
            "math"
    )
    
    func iden(span int) *expm.Mat {
            d := make([]float64, span*span)
            for i := 0; i < span*span; i += span + 1 {
                    d[i] = 1
            }
            return expm.NewMat(span, span, d)
    }
    func skewMatrix(m *expm.Mat) *expm.Mat {
            v := m.RawMatrix().Data
            return newStackedDense([][]float64{
                    {0, -v[2], v[1]},
                    {v[2], 0, -v[0]},
                    {-v[1], v[0], 0}})
    }
    func newStackedDense(mv [][]float64) *expm.Mat {
            r := len(mv)
            c := len(mv[0])
            d := make([]float64, r*c)
            for i := 0; i < r; i++ {
                    for j := 0; j < c; j++ {
                            d[i*c+j] = mv[i][j]
                    }
            }
            return expm.NewMat(r, c, d)
    }
    
    func main() {
            a := skewMatrix(expm.NewMat(3, 1, []float64{0, 0, 1}))
            a.Scale(math.Pi/4, a)
            m := new(expm.Mat)
            id := iden(3)
    
            m.ExpMS(a, id)
            fmt.Println("goblas Engine = m:", m)
    }
    

Results in:

    goblas Engine = m: &{{{3 3 3 [0 -0 0 0 0 0 0 0 0]}}}

While when I use the cblas engine, I get the same zero matrix:

    cblas Engine = m: &{{{3 3 3 [0 0 0 0 0 0 0 0 0]}}}

I got this zero matrix even before I fiddled with the "genBlas.pl" file, when I was registered with "goblas.Blas{}". After fiddling, I tried both registrations, i.e. goblas and cblas, and still got the same result.
