HMLP: High-performance Machine Learning Primitives
hmlp::Regression< T > Class Template Reference

Public Member Functions

 Regression (size_t d, size_t n, Data< T > *X, Data< T > *Y)
 
Data< T > Ridge (kernel_s< T > &kernel, size_t niter)
 Linear ridge regression in closed form (TODO: support SVD). More...
 
Data< T > Lasso (kernel_s< T > &kernel, size_t niter)
 Lasso (L1-regularized) regression. More...
Data< T > SoftMax (kernel_s< T > &kernel, size_t nclass, size_t niter)
 Softmax (multinomial) regression over nclass classes. More...
Data< T > Solve (kernel_s< T > &kernel, size_t niter)
 Solve by gradient descent. More...
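
A minimal usage sketch based on the signatures above. The header name, the
kernel-descriptor fields, and the Data<T> setup helpers are assumptions
rather than facts documented on this page:

#include <hmlp.h>  // header name assumed

int main()
{
  size_t d = 10, n = 1000, niter = 100;
  hmlp::Data<float> X( d, n ); X.randn();  // d-by-n inputs; randn() assumed
  hmlp::Data<float> Y( 1, n ); Y.randn();  // 1-by-n targets
  kernel_s<float> kernel;                  // kernel descriptor
  kernel.type = KS_GAUSSIAN;               // enum value assumed
  hmlp::Regression<float> regression( d, n, &X, &Y );
  hmlp::Data<float> W = regression.Ridge( kernel, niter );
  return 0;
}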
 

Member Function Documentation

template<typename T >
Data<T> hmlp::Regression< T >::Lasso( kernel_s< T >& kernel, size_t niter )  [inline]

Lasso (L1-regularized) regression.
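
This page records no algorithm for Lasso(). For orientation only:
proximal-gradient (ISTA-style) lasso solvers are built around the scalar
soft-thresholding operator sketched below; whether HMLP's Lasso() uses this
scheme is an assumption, not something this page confirms.

#include <algorithm>
#include <cmath>

// prox of t * lambda * |w|: sign( w ) * max( |w| - t * lambda, 0 ).
template<typename T>
T SoftThreshold( T w, T threshold )
{
  return std::copysign( std::max( std::abs( w ) - threshold, T( 0 ) ), w );
}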

template<typename T >
Data<T> hmlp::Regression< T >::Ridge( kernel_s< T >& kernel, size_t niter )  [inline]

Linear ridge regression in closed form (TODO: support SVD). The solver
builds the regularized Gram matrix XXt + lambda * I and the projection XY,
then solves the normal equations:

W = ( XXt + lambda * I )^{-1} * XY
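
For concreteness, the same closed-form solve written with Eigen. This is an
illustration only: HMLP uses its own Data<T> container and BLAS/LAPACK
wrappers, not Eigen, and the function below is not part of the library.

#include <Eigen/Dense>

// W = ( XXt + lambda * I )^{-1} * XY via a Cholesky-type factorization.
Eigen::VectorXd RidgeSolve( const Eigen::MatrixXd& X,  // d-by-n inputs
                            const Eigen::VectorXd& Y,  // n targets
                            double lambda )
{
  const auto d = X.rows();
  Eigen::MatrixXd A = X * X.transpose()
                    + lambda * Eigen::MatrixXd::Identity( d, d );
  Eigen::VectorXd b = X * Y;   // XY
  return A.ldlt().solve( b );  // SPD solve; the SVD route in the TODO above
                               // would be more robust for ill-conditioned XXt.
}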

template<typename T >
Data<T> hmlp::Regression< T >::SoftMax( kernel_s< T >& kernel, size_t nclass, size_t niter )  [inline]

Softmax (multinomial) regression over nclass classes. Per the inline notes,
the solver creates a kernel matrix K from the kernel descriptor, builds a
simple GOFMM compression of K, and repeatedly computes the class scores

P = K * W
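
The page records only the projection P = K * W. In multinomial regression
the scores are normally pushed through a softmax per sample; assuming
SoftMax() does the same, a numerically stable sketch of that step:

#include <algorithm>
#include <cmath>
#include <vector>

// Stable softmax of one row of class scores P( i, : ).
std::vector<double> SoftMaxRow( const std::vector<double>& scores )
{
  double shift = *std::max_element( scores.begin(), scores.end() );
  std::vector<double> prob( scores.size() );
  double sum = 0.0;
  for ( size_t j = 0; j < scores.size(); j ++ )
  {
    prob[ j ] = std::exp( scores[ j ] - shift );  // subtract max to avoid overflow
    sum += prob[ j ];
  }
  for ( auto& p : prob ) p /= sum;  // normalize to probabilities
  return prob;
}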

template<typename T >
Data<T> hmlp::Regression< T >::Solve( kernel_s< T >& kernel, size_t niter )  [inline]

Gradient descent on the kernel model Z = K * W + B, with per-iteration
updates

W += ( -1.0 / n ) * K * ( K * W + B - Y + lambda * W )
B += ( -1.0 / n ) * ( K * W + B - Y )

Per the inline notes, the solver:

creates a kernel matrix K

creates a simple GOFMM compression of K

forms the residual K * W + B - Y and its regularized counterpart
( K + lambda * I ) * W + B - Y

updates B += ( -alpha / n ) * ( K * W + B - Y )

updates W += ( -alpha / n ) * K * ( ( K + lambda * I ) * W + B - Y )

evaluates the prediction Z = K * W + B
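
A dense reference implementation of one such step. Illustration only: HMLP
applies K through the GOFMM compression rather than the dense n-by-n matrix
used here, and the step size alpha and the scalar-bias reduction (summing
the residual) are assumptions this page does not pin down.

#include <vector>

// One gradient step for W (length n) and the scalar bias B, with K stored
// dense and row-major.
void GradientStep( size_t n, const std::vector<double>& K,
                   std::vector<double>& W, double& B,
                   const std::vector<double>& Y,
                   double alpha, double lambda )
{
  // Residual R = K * W + B - Y.
  std::vector<double> R( n, 0.0 );
  for ( size_t i = 0; i < n; i ++ )
  {
    for ( size_t j = 0; j < n; j ++ ) R[ i ] += K[ i * n + j ] * W[ j ];
    R[ i ] += B - Y[ i ];
  }
  // W += ( -alpha / n ) * K * ( R + lambda * W ).
  std::vector<double> G( n, 0.0 );
  double sumR = 0.0;
  for ( size_t i = 0; i < n; i ++ )
  {
    for ( size_t j = 0; j < n; j ++ )
      G[ i ] += K[ i * n + j ] * ( R[ j ] + lambda * W[ j ] );
    sumR += R[ i ];
  }
  for ( size_t i = 0; i < n; i ++ ) W[ i ] -= ( alpha / n ) * G[ i ];
  // B += ( -alpha / n ) * ( K * W + B - Y ), reduced to a scalar by summing.
  B -= ( alpha / n ) * sumR;
}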


The documentation for this class was generated from the following file: