1 Introduction
    1.1 ALGLIB license
    1.2 Documentation license
    1.3 Reference Manual and User Guide
    1.4 Acknowledgements
2 Getting started with ALGLIB
    2.1 ALGLIB structure
        2.1.1 Packages
        2.1.2 Subpackages
    2.2 Compatibility
    2.3 Compiling ALGLIB
        2.3.1 Adding to your project
    2.4 Using ALGLIB
        2.4.1 ALGLIB classes
        2.4.2 Datatypes
        2.4.3 Functions
        2.4.4 Using functions: 'expert' and 'friendly' interfaces
    2.5 Advanced topics
        2.5.1 Testing ALGLIB
3 ALGLIB reference manual

1 Introduction

1.1 ALGLIB license

ALGLIB is free software which uses dual licensing. You can either use it under the GPL license (version 2 or, at your option, any later version) or buy a commercial license without the copyleft requirement. A copy of the GNU General Public License is available at http://www.fsf.org/licensing/licenses. A copy of the commercial license can be found at http://www.alglib.net/commercial.php.

1.2 Documentation license

This reference manual is licensed under a BSD-like documentation license:

Copyright 1994-2009 Sergey Bochkanov, ALGLIB Project. All rights reserved.

Redistribution and use of this document (ALGLIB Reference Manual) with or without modification, are permitted provided that such redistributions will retain the above copyright notice, this condition and the following disclaimer as the first (or last) lines of this file.

THIS DOCUMENTATION IS PROVIDED BY THE ALGLIB PROJECT "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE ALGLIB PROJECT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

1.3 Reference Manual and User Guide

The ALGLIB Project provides two sources of information: the ALGLIB Reference Manual (this document) and the ALGLIB User Guide.

The ALGLIB Reference Manual contains a full description of all publicly accessible ALGLIB units, accompanied by examples. The Reference Manual is focused on the source code: it documents units, functions, structures and so on. If you want to know what unit YYY can do or what subroutines unit ZZZ contains, the Reference Manual is the place to go. Free software needs free documentation - that's why the ALGLIB Reference Manual is licensed under a BSD-like documentation license.

In addition to the Reference Manual, we provide the User Guide. The User Guide is focused on more general questions: how fast is ALGLIB? how reliable is it? what are the strong and weak sides of the algorithms used? We aim to make the ALGLIB User Guide an important source of information both about ALGLIB and about numerical analysis algorithms in general. We want it to be a book about algorithms, not just software documentation. And we want it to be unique - that's why the ALGLIB User Guide is distributed under a less permissive, personal-use-only license.

1.4 Acknowledgements

ALGLIB would not have been possible without the contributions of the following open source projects:

2 Getting started with ALGLIB

2.1 ALGLIB structure

2.1.1 Packages

ALGLIB for C# is a pure C# 2.0 library automatically generated by code generation tools developed within the ALGLIB project. Pre-3.0 versions of ALGLIB included more than 100 units, but it was difficult to work with such a large number of files. Since ALGLIB 3.0, all units have been merged into 11 packages and two support units:

One package may rely on others, but we have tried to reduce the number of dependencies. Every package relies on ap.cs, and many packages rely on alglibinternal.cs. Many packages require only these two to work, and most of the others need significantly fewer than all 13 files. For example, statistics.cs requires the two files mentioned above and only one additional package - specialfunctions.cs.

2.1.2 Subpackages

There is one more concept to learn - subpackages. Every package was created from several source files. For example (as of ALGLIB 3.0.0), linalg.cs was created by merging together 14 .cs files. These files provide different functionality: one of them calculates triangular factorizations, another generates random matrices, and so on. We've merged the source code, but what should we do with their documentation?

Of course, we could merge their documentation (as we've merged the units) into one big list of functions and data structures, but such a list would be hard to read. Instead, we have decided to merge the source code but keep the documentation separate.

If you look at the list of ALGLIB packages, you will see that each package includes several subpackages. For example, linalg.cs includes the trfac, svd, evd and other subpackages. These subpackages do not exist as separate files, namespaces or other entities. They are just subsets of one large file, one large class, which provide significantly different functionality. They have separate documentation sections, but if you want to use the svd subpackage, you have to compile linalg.cs, not svd.cs.

2.2 Compatibility

ALGLIB for C# is compatible with:

2.3 Compiling ALGLIB

2.3.1 Adding to your project

Adding ALGLIB to your project is easy - just pick the packages you need and... add them to your project! It will work without any additional settings.

As you can see, ALGLIB has no project files. Why? There are several reasons:

In any case, compiling ALGLIB is so simple that even without a project file you can do it in a few minutes.

2.4 Using ALGLIB

2.4.1 ALGLIB classes

All ALGLIB functionality is provided by one large class - alglib.
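Package-specific datatypes and functions are all exposed as members of this class. Here is a minimal sketch (the call below is the same 'friendly' spline construction shown in section 2.4.4; it assumes the package containing the spline1d subpackage has been added to your project):

double[] x = new double[]{0,1,2,3};
double[] y = new double[]{0,1,4,9};
alglib.spline1dinterpolant s;

// both the datatype (alglib.spline1dinterpolant) and the function
// (alglib.spline1dbuildlinear) are accessed through the alglib class
alglib.spline1dbuildlinear(x, y, out s);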

2.4.2 Datatypes

ALGLIB defines several "basic" datatypes (types which are used across many packages) and many package-specific datatypes. "Basic" datatypes are defined in ap.cs. Here is a short list:

Package-specific datatypes are classes which can be divided into two distinct groups:

2.4.3 Functions

The most important "basic" functions from the alglib class are:

2.4.4 Using functions: 'expert' and 'friendly' interfaces

Most ALGLIB functions provide two interfaces: 'expert' and 'friendly'. What is the difference between the two? When you use the 'friendly' interface, ALGLIB:

When you use the 'expert' interface, ALGLIB:

Here are several examples of 'friendly' and 'expert' interfaces:

double[] x = new double[]{0,1,2,3};
double[] y = new double[]{1,5,3,9};
double[] y2 = new double[]{1,5,3,9,0};
alglib.spline1dinterpolant s;

alglib.spline1dbuildlinear(x, y, 4, out s);  // 'expert' interface is used
alglib.spline1dbuildlinear(x, y, out s);     // 'friendly' interface - input size is
                                             // automatically determined

alglib.spline1dbuildlinear(x, y2, 4, out s); // y2.Length is 5, but it will work

alglib.spline1dbuildlinear(x, y2, out s);    // it won't work because sizes of x and y2
                                             // are inconsistent

'Friendly' interface - matrix semantics:

double[,] a;
alglib.matinvreport  rep;
int                  info;

// 
// 'Friendly' interface: spdmatrixinverse() accepts and returns symmetric matrix
// 

// symmetric positive definite matrix
a = new double[,]{{2,1},{1,2}};

// after this line A will contain [[0.66,-0.33],[-0.33,0.66]]
// which is symmetric too
alglib.spdmatrixinverse(ref a, out info, out rep); 

// you may try to pass nonsymmetric matrix
a = new double[,]{{2,1},{0,2}};

// but exception will be thrown in such case
alglib.spdmatrixinverse(ref a, out info, out rep); 

Same function but with 'expert' interface:

double[,] a;
alglib.matinvreport  rep;
int                  info;

// 
// 'Expert' interface, spdmatrixinverse()
// 

// only upper triangle is used; a[1,0] is initialized by NaN,
// but it can be an arbitrary number
a = new double[,]{{2,1},{Double.NaN,2}};

// after this line A will contain [[0.66,-0.33],[NaN,0.66]]
// only upper triangle is modified
alglib.spdmatrixinverse(ref a, 2 /* N */, true /* upper triangle is used */, out info, out rep); 

2.5 Advanced topics

2.5.1 Testing ALGLIB

There are two test suites in ALGLIB: computational tests and interface tests. Computational tests are located in /tests/test_c.cs. They are focused on the numerical properties of the algorithms, stress testing and "deep" tests (large, automatically generated problems). They require a significant amount of time to finish (tens of minutes).

Interface tests are located in /tests/test_i.cs. These tests are focused on the ability to correctly pass data between the computational core and the caller, to detect simple problems in inputs, and to at least compile ALGLIB with your compiler. They are very fast (about a minute to finish, including compilation time).

Running the test suite is easy - just

  1. compile one of these files (test_c.cs or test_i.cs) along with the rest of the library
  2. launch the executable you get. It may take from several seconds (interface tests) to several minutes (computational tests) to get the final results

However, there are no strong reasons to run the tests before using ALGLIB. It has been tested with many different compiler settings, and the very nature of the .NET framework and its inherent portability allow us to say that ALGLIB for .NET will work on any system.

3 ALGLIB reference manual

Packages and subpackages

AlglibMisc package
hqrnd High quality random numbers generator
nearestneighbor Nearest neighbor search: approximate and exact
 
DataAnalysis package
bdss Basic dataset functions
dforest Decision forest classifier (regression model)
kmeans K-means++ clustering
lda Linear discriminant analysis
linreg Linear models
logit Logit models
mcpd 
mlpbase Basic neural network operations
mlpe Neural network ensemble models
mlptrain Neural network training
pca Principal component analysis
 
DiffEquations package
odesolver Ordinary differential equation solver
 
FastTransforms package
conv Fast real/complex convolution
corr Fast real/complex cross-correlation
fft Real/complex FFT
fht Real Fast Hartley Transform
 
Integration package
autogk Adaptive 1-dimensional integration
gkq Gauss-Kronrod quadrature generator
gq Gaussian quadrature generator
 
Interpolation package
idwint Inverse distance weighting: interpolation/fitting
lsfit Linear and nonlinear least-squares solvers
polint Polynomial interpolation/fitting
pspline Parametric spline interpolation
ratint Rational interpolation/fitting
spline1d 1D spline interpolation/fitting
spline2d 2D spline interpolation
 
LinAlg package
ablas Level 2 and Level 3 BLAS operations
bdsvd Bidiagonal SVD
evd Eigensolvers
inverseupdate Sherman-Morrison update of the inverse matrix
matdet Determinant calculation
matgen Random matrix generation
matinv Matrix inverse
ortfac Real/complex QR/LQ, bi(tri)diagonal, Hessenberg decompositions
rcond Condition number estimate
schur Schur decomposition
spdgevd Generalized symmetric eigensolver
svd Singular value decomposition
trfac LU and Cholesky decompositions
 
Optimization package
minbleic Bound constrained optimizer with additional linear equality/inequality constraints
mincg Conjugate gradient optimizer
mincomp Backward compatibility functions
minlbfgs Limited memory BFGS optimizer
minlm Improved Levenberg-Marquardt optimizer
minqp Quadratic programming with bound and linear equality/inequality constraints
 
Solvers package
densesolver Dense linear system solver
nleq Solvers for nonlinear equations
 
SpecialFunctions package
airyf Airy functions
bessel Bessel functions
betaf Beta function
binomialdistr Binomial distribution
chebyshev Chebyshev polynomials
chisquaredistr Chi-Square distribution
dawson Dawson integral
elliptic Elliptic integrals
expintegrals Exponential integrals
fdistr F-distribution
fresnel Fresnel integrals
gammafunc Gamma function
hermite Hermite polynomials
ibetaf Incomplete beta function
igammaf Incomplete gamma function
jacobianelliptic Jacobian elliptic functions
laguerre Laguerre polynomials
legendre Legendre polynomials
normaldistr Normal distribution
poissondistr Poisson distribution
psif Psi function
studenttdistr Student's t-distribution
trigintegrals Trigonometric integrals
 
Statistics package
basestat Mean, variance, covariance, correlation, etc.
correlationtests Hypothesis testing: correlation tests
jarquebera Hypothesis testing: Jarque-Bera test
mannwhitneyu Hypothesis testing: Mann-Whitney-U test
stest Hypothesis testing: sign test
studentttests Hypothesis testing: Student's t-test
variancetests Hypothesis testing: F-test and one-sample variance test
wsr Hypothesis testing: Wilcoxon signed rank test
 
cmatrixcopy
cmatrixgemm
cmatrixlefttrsm
cmatrixmv
cmatrixrank1
cmatrixrighttrsm
cmatrixsyrk
cmatrixtranspose
rmatrixcopy
rmatrixgemm
rmatrixlefttrsm
rmatrixmv
rmatrixrank1
rmatrixrighttrsm
rmatrixsyrk
rmatrixtranspose
/************************************************************************* Copy Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/
public static void cmatrixcopy( int m, int n, complex[,] a, int ia, int ja, ref complex[,] b, int ib, int jb)
/************************************************************************* This subroutine calculates C = alpha*op1(A)*op2(B) + beta*C where: * C is MxN general matrix * op1(A) is MxK matrix * op2(B) is KxN matrix * "op" may be identity transformation, transposition, conjugate transposition Additional info: * cache-oblivious algorithm is used. * multiplication result replaces C. If Beta=0, C elements are not used in calculations (not multiplied by zero - just not referenced) * if Alpha=0, A is not used (not multiplied by zero - just not referenced) * if both Beta and Alpha are zero, C is filled by zeros. INPUT PARAMETERS N - matrix size, N>0 M - matrix size, M>0 K - matrix size, K>0 Alpha - coefficient A - matrix IA - submatrix offset JA - submatrix offset OpTypeA - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition B - matrix IB - submatrix offset JB - submatrix offset OpTypeB - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition Beta - coefficient C - matrix IC - submatrix offset JC - submatrix offset -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/
public static void cmatrixgemm( int m, int n, int k, complex alpha, complex[,] a, int ia, int ja, int optypea, complex[,] b, int ib, int jb, int optypeb, complex beta, ref complex[,] c, int ic, int jc)
/************************************************************************* This subroutine calculates op(A^-1)*X where: * X is MxN general matrix * A is MxM upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition, conjugate transposition Multiplication result replaces X. Cache-oblivious algorithm is used. INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, M>=0 A - matrix, actual matrix is stored in A[I1:I1+M-1,J1:J1+M-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition X - matrix, actual matrix is stored in X[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/
public static void cmatrixlefttrsm( int m, int n, complex[,] a, int i1, int j1, bool isupper, bool isunit, int optype, ref complex[,] x, int i2, int j2)
/************************************************************************* Matrix-vector product: y := op(A)*x INPUT PARAMETERS: M - number of rows of op(A) M>=0 N - number of columns of op(A) N>=0 A - target matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpA - operation type: * OpA=0 => op(A) = A * OpA=1 => op(A) = A^T * OpA=2 => op(A) = A^H X - input vector IX - subvector offset IY - subvector offset OUTPUT PARAMETERS: Y - vector which stores result if M=0, then subroutine does nothing. if N=0, Y is filled by zeros. -- ALGLIB routine -- 28.01.2010 Bochkanov Sergey *************************************************************************/
public static void cmatrixmv( int m, int n, complex[,] a, int ia, int ja, int opa, complex[] x, int ix, ref complex[] y, int iy)
/************************************************************************* Rank-1 correction: A := A + u*v' INPUT PARAMETERS: M - number of rows N - number of columns A - target matrix, MxN submatrix is updated IA - submatrix offset (row index) JA - submatrix offset (column index) U - vector #1 IU - subvector offset V - vector #2 IV - subvector offset *************************************************************************/
public static void cmatrixrank1( int m, int n, ref complex[,] a, int ia, int ja, ref complex[] u, int iu, ref complex[] v, int iv)
/************************************************************************* This subroutine calculates X*op(A^-1) where: * X is MxN general matrix * A is NxN upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition, conjugate transposition Multiplication result replaces X. Cache-oblivious algorithm is used. INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, M>=0 A - matrix, actual matrix is stored in A[I1:I1+N-1,J1:J1+N-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition X - matrix, actual matrix is stored in X[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/
public static void cmatrixrighttrsm( int m, int n, complex[,] a, int i1, int j1, bool isupper, bool isunit, int optype, ref complex[,] x, int i2, int j2)
/************************************************************************* This subroutine calculates C=alpha*A*A^H+beta*C or C=alpha*A^H*A+beta*C where: * C is NxN Hermitian matrix given by its upper/lower triangle * A is NxK matrix when A*A^H is calculated, KxN matrix otherwise Additional info: * cache-oblivious algorithm is used. * multiplication result replaces C. If Beta=0, C elements are not used in calculations (not multiplied by zero - just not referenced) * if Alpha=0, A is not used (not multiplied by zero - just not referenced) * if both Beta and Alpha are zero, C is filled by zeros. INPUT PARAMETERS N - matrix size, N>=0 K - matrix size, K>=0 Alpha - coefficient A - matrix IA - submatrix offset JA - submatrix offset OpTypeA - multiplication type: * 0 - A*A^H is calculated * 2 - A^H*A is calculated Beta - coefficient C - matrix IC - submatrix offset JC - submatrix offset IsUpper - whether C is upper triangular or lower triangular -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/
public static void cmatrixsyrk( int n, int k, double alpha, complex[,] a, int ia, int ja, int optypea, double beta, ref complex[,] c, int ic, int jc, bool isupper)
/************************************************************************* Cache-oblivious complex "copy-and-transpose" Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied and transposed IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/
public static void cmatrixtranspose( int m, int n, complex[,] a, int ia, int ja, ref complex[,] b, int ib, int jb)
/************************************************************************* Copy Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/
public static void rmatrixcopy( int m, int n, double[,] a, int ia, int ja, ref double[,] b, int ib, int jb)
/************************************************************************* Same as CMatrixGEMM, but for real numbers. OpType may be only 0 or 1. -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/
public static void rmatrixgemm( int m, int n, int k, double alpha, double[,] a, int ia, int ja, int optypea, double[,] b, int ib, int jb, int optypeb, double beta, ref double[,] c, int ic, int jc)
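
Not one of the manual's own examples - a minimal sketch of rmatrixgemm usage, assuming the routine is called directly on the alglib class (as the other examples in this manual do) with zero submatrix offsets:

double[,] a = new double[,]{{1,2},{3,4}};
double[,] b = new double[,]{{5,6},{7,8}};
double[,] c = new double[2,2];

// C := 1.0*A*B + 0.0*C  (M=N=K=2, OpTypeA=OpTypeB=0, i.e. no transposition)
alglib.rmatrixgemm(2, 2, 2, 1.0, a, 0, 0, 0, b, 0, 0, 0, 0.0, ref c, 0, 0);

// c now holds [[19,22],[43,50]]
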
/************************************************************************* Same as CMatrixLeftTRSM, but for real matrices OpType may be only 0 or 1. -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/
public static void rmatrixlefttrsm( int m, int n, double[,] a, int i1, int j1, bool isupper, bool isunit, int optype, ref double[,] x, int i2, int j2)
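
An illustrative sketch under the same assumptions: rmatrixlefttrsm computing X := A^-1*X for an upper triangular A and a 2x1 right-hand side.

double[,] a = new double[,]{{2,1},{0,4}};  // upper triangular 2x2
double[,] x = new double[,]{{3},{4}};      // 2x1 right-hand side

// X := A^-1*X  (IsUpper=true, IsUnit=false, OpType=0)
alglib.rmatrixlefttrsm(2, 1, a, 0, 0, true, false, 0, ref x, 0, 0);

// x now holds [[1],[1]]
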
/************************************************************************* Matrix-vector product: y := op(A)*x INPUT PARAMETERS: M - number of rows of op(A) N - number of columns of op(A) A - target matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpA - operation type: * OpA=0 => op(A) = A * OpA=1 => op(A) = A^T X - input vector IX - subvector offset IY - subvector offset OUTPUT PARAMETERS: Y - vector which stores result if M=0, then subroutine does nothing. if N=0, Y is filled by zeros. -- ALGLIB routine -- 28.01.2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixmv( int m, int n, double[,] a, int ia, int ja, int opa, double[] x, int ix, ref double[] y, int iy)
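
A short illustrative sketch of the real matrix-vector product (zero offsets, no transposition assumed):

double[,] a = new double[,]{{1,2},{3,4}};
double[]  x = new double[]{1,1};
double[]  y = new double[2];

// y := A*x  (OpA=0)
alglib.rmatrixmv(2, 2, a, 0, 0, 0, x, 0, ref y, 0);

// y now holds [3,7]
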
/************************************************************************* Rank-1 correction: A := A + u*v' INPUT PARAMETERS: M - number of rows N - number of columns A - target matrix, MxN submatrix is updated IA - submatrix offset (row index) JA - submatrix offset (column index) U - vector #1 IU - subvector offset V - vector #2 IV - subvector offset *************************************************************************/
public static void rmatrixrank1( int m, int n, ref double[,] a, int ia, int ja, ref double[] u, int iu, ref double[] v, int iv)
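
An illustrative sketch of the rank-1 update (values chosen only to show the calling convention):

double[,] a = new double[,]{{1,0},{0,1}};
double[]  u = new double[]{1,2};
double[]  v = new double[]{3,4};

// A := A + u*v'
alglib.rmatrixrank1(2, 2, ref a, 0, 0, ref u, 0, ref v, 0);

// a now holds [[4,4],[6,9]]
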
/************************************************************************* Same as CMatrixRightTRSM, but for real matrices OpType may be only 0 or 1. -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/
public static void rmatrixrighttrsm( int m, int n, double[,] a, int i1, int j1, bool isupper, bool isunit, int optype, ref double[,] x, int i2, int j2)
/************************************************************************* Same as CMatrixSYRK, but for real matrices OpType may be only 0 or 1. -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/
public static void rmatrixsyrk( int n, int k, double alpha, double[,] a, int ia, int ja, int optypea, double beta, ref double[,] c, int ic, int jc, bool isupper)
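
A small illustrative sketch of rmatrixsyrk; note that only the requested triangle of C is updated:

double[,] a = new double[,]{{1},{2}};  // N=2, K=1
double[,] c = new double[2,2];

// C := 1.0*A*A^T + 0.0*C, upper triangle only (OpTypeA=0, IsUpper=true)
alglib.rmatrixsyrk(2, 1, 1.0, a, 0, 0, 0, 0.0, ref c, 0, 0, true);

// upper triangle of c now holds 1, 2 and 4; the lower triangle is untouched
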
/************************************************************************* Cache-oblivious real "copy-and-transpose" Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied and transposed IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/
public static void rmatrixtranspose( int m, int n, double[,] a, int ia, int ja, ref double[,] b, int ib, int jb)
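
An illustrative sketch of copy-and-transpose with zero offsets:

double[,] a = new double[,]{{1,2,3},{4,5,6}};  // 2x3 source
double[,] b = new double[3,2];                 // 3x2 destination

alglib.rmatrixtranspose(2, 3, a, 0, 0, ref b, 0, 0);

// b now holds [[1,4],[2,5],[3,6]]
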
airy
/************************************************************************* Airy function Solution of the differential equation y"(x) = xy. The function returns the two independent solutions Ai, Bi and their first derivatives Ai'(x), Bi'(x). Evaluation is by power series summation for small x, by rational minimax approximations for large x. ACCURACY: Error criterion is absolute when function <= 1, relative when function > 1, except * denotes relative error criterion. For large negative x, the absolute error increases as x^1.5. For large positive x, the relative error increases as x^1.5. Arithmetic domain function # trials peak rms IEEE -10, 0 Ai 10000 1.6e-15 2.7e-16 IEEE 0, 10 Ai 10000 2.3e-14* 1.8e-15* IEEE -10, 0 Ai' 10000 4.6e-15 7.6e-16 IEEE 0, 10 Ai' 10000 1.8e-14* 1.5e-15* IEEE -10, 10 Bi 30000 4.2e-15 5.3e-16 IEEE -10, 10 Bi' 30000 4.9e-15 7.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
public static void airy( double x, out double ai, out double aip, out double bi, out double bip)
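
A short usage sketch; the commented values are standard reference values for the Airy functions at zero, rounded to four digits:

double ai, aip, bi, bip;
alglib.airy(0.0, out ai, out aip, out bi, out bip);

// Ai(0) ≈ 0.3550, Ai'(0) ≈ -0.2588, Bi(0) ≈ 0.6149, Bi'(0) ≈ 0.4483
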
autogkreport
autogkstate
autogkintegrate
autogkresults
autogksingular
autogksmooth
autogksmoothw
autogk_d1 Integrating f=exp(x) by adaptive integrator
/************************************************************************* Integration report: * TerminationType = completion code: * -5 non-convergence of Gauss-Kronrod nodes calculation subroutine. * -1 incorrect parameters were specified * 1 OK * Rep.NFEV contains number of function calculations * Rep.NIntervals contains number of intervals [a,b] was partitioned into. *************************************************************************/
public class autogkreport { public int terminationtype; public int nfev; public int nintervals; }
/************************************************************************* This structure stores state of the integration algorithm. Although this class has public fields, they are not intended for external use. You should use ALGLIB functions to work with this class: * autogksmooth()/AutoGKSmoothW()/... to create objects * autogkintegrate() to begin integration * autogkresults() to get results *************************************************************************/
public class autogkstate { }
/************************************************************************* This function is used to launch iterations of the adaptive integrator. It accepts the following parameters: func - callback which calculates f(x) for given x obj - optional object which is passed to func; can be null -- ALGLIB -- Copyright 07.05.2009 by Bochkanov Sergey *************************************************************************/
public static void autogkintegrate(autogkstate state, integrator1_func func, object obj)

Examples:   [1]  

/************************************************************************* Adaptive integration results Called after AutoGKIteration returned False. Input parameters: State - algorithm state (used by AutoGKIteration). Output parameters: V - integral(f(x)dx,a,b) Rep - integration report (see AutoGKReport description) -- ALGLIB -- Copyright 14.11.2007 by Bochkanov Sergey *************************************************************************/
public static void autogkresults( autogkstate state, out double v, out autogkreport rep)

Examples:   [1]  

/************************************************************************* Integration on a finite interval [A,B]. The integrand may have integrable singularities at A/B. F(X) must diverge as "(x-A)^alpha" at A, as "(B-x)^beta" at B, with known alpha/beta (alpha>-1, beta>-1). If alpha/beta are not known, estimates from below can be used (but these estimates should be greater than -1 too). One of alpha/beta variables (or even both alpha/beta) may be equal to 0, which means that function F(x) is non-singular at A/B. Anyway (singular at bounds or not), function F(x) is supposed to be continuous on (A,B). Fast-convergent algorithm based on a Gauss-Kronrod formula is used. Result is calculated with accuracy close to the machine precision. INPUT PARAMETERS: A, B - interval boundaries (A<B, A=B or A>B) Alpha - power-law coefficient of the F(x) at A, Alpha>-1 Beta - power-law coefficient of the F(x) at B, Beta>-1 OUTPUT PARAMETERS State - structure which stores algorithm state SEE ALSO AutoGKSmooth, AutoGKSmoothW, AutoGKResults. -- ALGLIB -- Copyright 06.05.2009 by Bochkanov Sergey *************************************************************************/
public static void autogksingular( double a, double b, double alpha, double beta, out autogkstate state)
/************************************************************************* Integration of a smooth function F(x) on a finite interval [a,b]. Fast-convergent algorithm based on a Gauss-Kronrod formula is used. Result is calculated with accuracy close to the machine precision. Algorithm works well only with smooth integrands. It may be used with continuous non-smooth integrands, but with less performance. It should never be used with integrands which have integrable singularities at lower or upper limits - algorithm may crash. Use AutoGKSingular in such cases. INPUT PARAMETERS: A, B - interval boundaries (A<B, A=B or A>B) OUTPUT PARAMETERS State - structure which stores algorithm state SEE ALSO AutoGKSmoothW, AutoGKSingular, AutoGKResults. -- ALGLIB -- Copyright 06.05.2009 by Bochkanov Sergey *************************************************************************/
public static void autogksmooth(double a, double b, out autogkstate state)

Examples:   [1]  

/************************************************************************* Integration of a smooth function F(x) on a finite interval [a,b]. This subroutine is same as AutoGKSmooth(), but it guarantees that interval [a,b] is partitioned into subintervals which have width at most XWidth. Subroutine can be used when integrating nearly-constant function with narrow "bumps" (about XWidth wide). If "bumps" are too narrow, AutoGKSmooth subroutine can overlook them. INPUT PARAMETERS: A, B - interval boundaries (A<B, A=B or A>B) OUTPUT PARAMETERS State - structure which stores algorithm state SEE ALSO AutoGKSmooth, AutoGKSingular, AutoGKResults. -- ALGLIB -- Copyright 06.05.2009 by Bochkanov Sergey *************************************************************************/
public static void autogksmoothw( double a, double b, double xwidth, out autogkstate state)
public static void int_function_1_func(double x, double xminusa, double bminusx, ref double y, object obj)
{
    // this callback calculates f(x)=exp(x)
    y = Math.Exp(x);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates integration of f=exp(x) on [0,1]:
    // * first, autogkstate is initialized
    // * then we call integration function
    // * and finally we obtain results with autogkresults() call
    //
    double a = 0;
    double b = 1;
    alglib.autogkstate s;
    double v;
    alglib.autogkreport rep;

    alglib.autogksmooth(a, b, out s);
    alglib.autogkintegrate(s, int_function_1_func, null);
    alglib.autogkresults(s, out v, out rep);

    System.Console.WriteLine("{0:F2", v); // EXPECTED: 1.7182
    System.Console.ReadLine();
    return 0;
}


cov2
covm
covm2
pearsoncorr2
pearsoncorrelation
pearsoncorrm
pearsoncorrm2
sampleadev
samplemedian
samplemoments
samplepercentile
spearmancorr2
spearmancorrm
spearmancorrm2
spearmanrankcorrelation
basestat_d_base Basic functionality (moments, adev, median, percentile)
basestat_d_c2 Correlation (covariance) between two random variables
basestat_d_cm Correlation (covariance) between components of random vector
basestat_d_cm2 Correlation (covariance) between two random vectors
/************************************************************************* 2-sample covariance Input parameters: X - sample 1 (array indexes: [0..N-1]) Y - sample 2 (array indexes: [0..N-1]) N - N>=0, sample size: * if given, only N leading elements of X/Y are processed * if not given, automatically determined from input sizes Result: covariance (zero for N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/
public static double cov2(double[] x, double[] y)
public static double cov2(double[] x, double[] y, int n)

Examples:   [1]  

/************************************************************************* Covariance matrix INPUT PARAMETERS: X - array[N,M], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation N - N>=0, number of observations: * if given, only leading N rows of X are used * if not given, automatically determined from input size M - M>0, number of variables: * if given, only leading M columns of X are used * if not given, automatically determined from input size OUTPUT PARAMETERS: C - array[M,M], covariance matrix (zero if N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/
public static void covm(double[,] x, out double[,] c)
public static void covm(double[,] x, int n, int m, out double[,] c)

Examples:   [1]  

/************************************************************************* Cross-covariance matrix INPUT PARAMETERS: X - array[N,M1], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation Y - array[N,M2], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation N - N>=0, number of observations: * if given, only leading N rows of X/Y are used * if not given, automatically determined from input sizes M1 - M1>0, number of variables in X: * if given, only leading M1 columns of X are used * if not given, automatically determined from input size M2 - M2>0, number of variables in Y: * if given, only leading M2 columns of Y are used * if not given, automatically determined from input size OUTPUT PARAMETERS: C - array[M1,M2], cross-covariance matrix (zero if N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/
public static void covm2(double[,] x, double[,] y, out double[,] c)
public static void covm2(double[,] x, double[,] y, int n, int m1, int m2, out double[,] c)

Examples:   [1]  

/************************************************************************* Pearson product-moment correlation coefficient Input parameters: X - sample 1 (array indexes: [0..N-1]) Y - sample 2 (array indexes: [0..N-1]) N - N>=0, sample size: * if given, only N leading elements of X/Y are processed * if not given, automatically determined from input sizes Result: Pearson product-moment correlation coefficient (zero for N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/
public static double pearsoncorr2(double[] x, double[] y)
public static double pearsoncorr2(double[] x, double[] y, int n)

Examples:   [1]  

/************************************************************************* Obsolete function; we recommend using PearsonCorr2(). -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
public static double pearsoncorrelation(double[] x, double[] y, int n)
/************************************************************************* Pearson product-moment correlation matrix INPUT PARAMETERS: X - array[N,M], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation N - N>=0, number of observations: * if given, only leading N rows of X are used * if not given, automatically determined from input size M - M>0, number of variables: * if given, only leading M columns of X are used * if not given, automatically determined from input size OUTPUT PARAMETERS: C - array[M,M], correlation matrix (zero if N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/
public static void pearsoncorrm(double[,] x, out double[,] c)
public static void pearsoncorrm(double[,] x, int n, int m, out double[,] c)

Examples:   [1]  

/************************************************************************* Pearson product-moment cross-correlation matrix INPUT PARAMETERS: X - array[N,M1], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation Y - array[N,M2], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation N - N>=0, number of observations: * if given, only leading N rows of X/Y are used * if not given, automatically determined from input sizes M1 - M1>0, number of variables in X: * if given, only leading M1 columns of X are used * if not given, automatically determined from input size M2 - M2>0, number of variables in Y: * if given, only leading M2 columns of Y are used * if not given, automatically determined from input size OUTPUT PARAMETERS: C - array[M1,M2], cross-correlation matrix (zero if N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/
public static void pearsoncorrm2(double[,] x, double[,] y, out double[,] c)
public static void pearsoncorrm2(double[,] x, double[,] y, int n, int m1, int m2, out double[,] c)

Examples:   [1]  

/************************************************************************* Average deviation (ADev) Input parameters: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X Output parameters: ADev - average deviation -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/
public static void sampleadev(double[] x, out double adev)
public static void sampleadev(double[] x, int n, out double adev)

Examples:   [1]  

/************************************************************************* Median calculation. Input parameters: X - sample (array indexes: [0..N-1]) N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X Output parameters: Median -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/
public static void samplemedian(double[] x, out double median)
public static void samplemedian(double[] x, int n, out double median)

Examples:   [1]  

/************************************************************************* Calculation of the distribution moments: mean, variance, skewness, kurtosis. INPUT PARAMETERS: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X OUTPUT PARAMETERS Mean - mean. Variance- variance. Skewness- skewness (if variance<>0; zero otherwise). Kurtosis- kurtosis (if variance<>0; zero otherwise). -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/
public static void samplemoments(double[] x, out double mean, out double variance, out double skewness, out double kurtosis)
public static void samplemoments(double[] x, int n, out double mean, out double variance, out double skewness, out double kurtosis)

Examples:   [1]  

/************************************************************************* Percentile calculation. Input parameters: X - sample (array indexes: [0..N-1]) N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X P - percentile (0<=P<=1) Output parameters: V - percentile -- ALGLIB -- Copyright 01.03.2008 by Bochkanov Sergey *************************************************************************/
public static void samplepercentile(double[] x, double p, out double v)
public static void samplepercentile(double[] x, int n, double p, out double v)

Examples:   [1]  

/************************************************************************* Spearman's rank correlation coefficient Input parameters: X - sample 1 (array indexes: [0..N-1]) Y - sample 2 (array indexes: [0..N-1]) N - N>=0, sample size: * if given, only N leading elements of X/Y are processed * if not given, automatically determined from input sizes Result: Spearman's rank correlation coefficient (zero for N=0 or N=1) -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
public static double spearmancorr2(double[] x, double[] y)
public static double spearmancorr2(double[] x, double[] y, int n)

Examples:   [1]  

/************************************************************************* Spearman's rank correlation matrix INPUT PARAMETERS: X - array[N,M], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation N - N>=0, number of observations: * if given, only leading N rows of X are used * if not given, automatically determined from input size M - M>0, number of variables: * if given, only leading M columns of X are used * if not given, automatically determined from input size OUTPUT PARAMETERS: C - array[M,M], correlation matrix (zero if N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/
public static void spearmancorrm(double[,] x, out double[,] c)
public static void spearmancorrm(double[,] x, int n, int m, out double[,] c)

Examples:   [1]  

/************************************************************************* Spearman's rank cross-correlation matrix INPUT PARAMETERS: X - array[N,M1], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation Y - array[N,M2], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation N - N>=0, number of observations: * if given, only leading N rows of X/Y are used * if not given, automatically determined from input sizes M1 - M1>0, number of variables in X: * if given, only leading M1 columns of X are used * if not given, automatically determined from input size M2 - M2>0, number of variables in Y: * if given, only leading M2 columns of Y are used * if not given, automatically determined from input size OUTPUT PARAMETERS: C - array[M1,M2], cross-correlation matrix (zero if N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/
public static void spearmancorrm2(double[,] x, double[,] y, out double[,] c)
public static void spearmancorrm2(double[,] x, double[,] y, int n, int m1, int m2, out double[,] c)

Examples:   [1]  

/************************************************************************* Obsolete function; we recommend using SpearmanCorr2(). -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
public static double spearmanrankcorrelation( double[] x, double[] y, int n)

public static int Main(string[] args)
{
    double[] x = new double[]{0,1,4,9,16,25,36,49,64,81};
    double mean;
    double variance;
    double skewness;
    double kurtosis;
    double adev;
    double p;
    double v;

    //
    // Here we demonstrate calculation of sample moments
    // (mean, variance, skewness, kurtosis)
    //
    alglib.samplemoments(x, out mean, out variance, out skewness, out kurtosis);
    System.Console.WriteLine("{0:F1", mean); // EXPECTED: 28.5
    System.Console.WriteLine("{0:F1", variance); // EXPECTED: 801.1667
    System.Console.WriteLine("{0:F1", skewness); // EXPECTED: 0.5751
    System.Console.WriteLine("{0:F1", kurtosis); // EXPECTED: -1.2666

    //
    // Average deviation
    //
    alglib.sampleadev(x, out adev);
    System.Console.WriteLine("{0:F1", adev); // EXPECTED: 23.2

    //
    // Median and percentile
    //
    alglib.samplemedian(x, out v);
    System.Console.WriteLine("{0:F1", v); // EXPECTED: 20.5
    p = 0.5;
    alglib.samplepercentile(x, p, out v);
    System.Console.WriteLine("{0:F1", v); // EXPECTED: 20.5
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // We have two samples - x and y, and want to measure dependency between them
    //
    double[] x = new double[]{0,1,4,9,16,25,36,49,64,81};
    double[] y = new double[]{0,1,2,3,4,5,6,7,8,9};
    double v;

    //
    // Three dependency measures are calculated:
    // * covariation
    // * Pearson correlation
    // * Spearman rank correlation
    //
    v = alglib.cov2(x, y);
    System.Console.WriteLine("{0:F2", v); // EXPECTED: 82.5
    v = alglib.pearsoncorr2(x, y);
    System.Console.WriteLine("{0:F2", v); // EXPECTED: 0.9627
    v = alglib.spearmancorr2(x, y);
    System.Console.WriteLine("{0:F2", v); // EXPECTED: 1.000
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // X is a sample matrix:
    // * I-th row corresponds to I-th observation
    // * J-th column corresponds to J-th variable
    //
    double[,] x = new double[,]{{1,0,1},{1,1,0},{-1,1,0},{-2,-1,1},{-1,0,9}};
    double[,] c;

    //
    // Three dependency measures are calculated:
    // * covariation
    // * Pearson correlation
    // * Spearman rank correlation
    //
    // Result is stored into C, with C[i,j] equal to correlation
    // (covariance) between I-th and J-th variables of X.
    //
    alglib.covm(x, out c);
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [[1.80,0.60,-1.40],[0.60,0.70,-0.80],[-1.40,-0.80,14.70]]
    alglib.pearsoncorrm(x, out c);
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [[1.000,0.535,-0.272],[0.535,1.000,-0.249],[-0.272,-0.249,1.000]]
    alglib.spearmancorrm(x, out c);
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [[1.000,0.556,-0.306],[0.556,1.000,-0.750],[-0.306,-0.750,1.000]]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // X and Y are sample matrices:
    // * I-th row corresponds to I-th observation
    // * J-th column corresponds to J-th variable
    //
    double[,] x = new double[,]{{1,0,1},{1,1,0},{-1,1,0},{-2,-1,1},{-1,0,9}};
    double[,] y = new double[,]{{2,3},{2,1},{-1,6},{-9,9},{7,1}};
    double[,] c;

    //
    // Three dependency measures are calculated:
    // * covariation
    // * Pearson correlation
    // * Spearman rank correlation
    //
    // Result is stored into C, with C[i,j] equal to correlation
    // (covariance) between I-th variable of X and J-th variable of Y.
    //
    alglib.covm2(x, y, out c);
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [[4.100,-3.250],[2.450,-1.500],[13.450,-5.750]]
    alglib.pearsoncorrm2(x, y, out c);
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [[0.519,-0.699],[0.497,-0.518],[0.596,-0.433]]
    alglib.spearmancorrm2(x, y, out c);
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [[0.541,-0.649],[0.216,-0.433],[0.433,-0.135]]
    System.Console.ReadLine();
    return 0;
}


dsoptimalsplit2
dsoptimalsplit2fast
/************************************************************************* Optimal binary classification Algorithm finds the optimal (=with minimal cross-entropy) binary partition. Internal subroutine. INPUT PARAMETERS: A - array[0..N-1], variable C - array[0..N-1], class numbers (0 or 1). N - array size OUTPUT PARAMETERS: Info - completion code: * -3, all values of A[] are the same (partition is impossible) * -2, one of C[] is incorrect (<0, >1) * -1, incorrect parameters were passed (N<=0). * 1, OK Threshold - partition boundary. Left part contains values which are strictly less than Threshold. Right part contains values which are greater than or equal to Threshold. PAL, PBL - probabilities P(0|v<Threshold) and P(1|v<Threshold) PAR, PBR - probabilities P(0|v>=Threshold) and P(1|v>=Threshold) CVE - cross-validation estimate of cross-entropy -- ALGLIB -- Copyright 22.05.2008 by Bochkanov Sergey *************************************************************************/
public static void dsoptimalsplit2( double[] a, int[] c, int n, out int info, out double threshold, out double pal, out double pbl, out double par, out double pbr, out double cve)
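
A hedged usage sketch (the routine is documented above as a public function, even though its comment calls it an internal subroutine; the data values are purely illustrative):

double[] a = new double[]{0.1, 0.2, 0.8, 0.9};  // attribute values
int[]    c = new int[]{0, 0, 1, 1};             // class labels (0 or 1)
int info;
double threshold, pal, pbl, par, pbr, cve;

alglib.dsoptimalsplit2(a, c, 4, out info, out threshold,
    out pal, out pbl, out par, out pbr, out cve);

// info=1 on success; threshold lies between 0.2 and 0.8 and separates the classes
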
/************************************************************************* Optimal partition, internal subroutine. Fast version. Accepts: A array[0..N-1], array of attributes C array[0..N-1], array of class labels TiesBuf array[0..N], temporaries (ties) CntBuf array[0..2*NC-1], temporaries (counts) Alpha centering factor (0<=alpha<=1, recommended value - 0.05) BufR array[0..N-1], temporaries BufI array[0..N-1], temporaries Output: Info error code (">0"=OK, "<0"=bad) RMS training set RMS error CVRMS leave-one-out RMS error Note: content of all arrays is changed by subroutine; it doesn't allocate temporaries. -- ALGLIB -- Copyright 11.12.2008 by Bochkanov Sergey *************************************************************************/
public static void dsoptimalsplit2fast( ref double[] a, ref int[] c, ref int[] tiesbuf, ref int[] cntbuf, ref double[] bufr, ref int[] bufi, int n, int nc, double alpha, out int info, out double threshold, out double rms, out double cvrms)
rmatrixbdsvd
/************************************************************************* Singular value decomposition of a bidiagonal matrix (extended algorithm) The algorithm performs the singular value decomposition of a bidiagonal matrix B (upper or lower) representing it as B = Q*S*P^T, where Q and P - orthogonal matrices, S - diagonal matrix with non-negative elements on the main diagonal, in descending order. The algorithm finds singular values. In addition, the algorithm can calculate matrices Q and P (more precisely, not the matrices, but their product with given matrices U and VT - U*Q and (P^T)*VT)). Of course, matrices U and VT can be of any type, including identity. Furthermore, the algorithm can calculate Q'*C (this product is calculated more effectively than U*Q, because this calculation operates with rows instead of matrix columns). The feature of the algorithm is its ability to find all singular values including those which are arbitrarily close to 0 with relative accuracy close to machine precision. If the parameter IsFractionalAccuracyRequired is set to True, all singular values will have high relative accuracy close to machine precision. If the parameter is set to False, only the biggest singular value will have relative accuracy close to machine precision. The absolute error of other singular values is equal to the absolute error of the biggest singular value. Input parameters: D - main diagonal of matrix B. Array whose index ranges within [0..N-1]. E - superdiagonal (or subdiagonal) of matrix B. Array whose index ranges within [0..N-2]. N - size of matrix B. IsUpper - True, if the matrix is upper bidiagonal. IsFractionalAccuracyRequired - accuracy to search singular values with. U - matrix to be multiplied by Q. Array whose indexes range within [0..NRU-1, 0..N-1]. The matrix can be bigger, in that case only the submatrix [0..NRU-1, 0..N-1] will be multiplied by Q. NRU - number of rows in matrix U. C - matrix to be multiplied by Q'. Array whose indexes range within [0..N-1, 0..NCC-1]. The matrix can be bigger, in that case only the submatrix [0..N-1, 0..NCC-1] will be multiplied by Q'. NCC - number of columns in matrix C. VT - matrix to be multiplied by P^T. Array whose indexes range within [0..N-1, 0..NCVT-1]. The matrix can be bigger, in that case only the submatrix [0..N-1, 0..NCVT-1] will be multiplied by P^T. NCVT - number of columns in matrix VT. Output parameters: D - singular values of matrix B in descending order. U - if NRU>0, contains matrix U*Q. VT - if NCVT>0, contains matrix (P^T)*VT. C - if NCC>0, contains matrix Q'*C. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). Additional information: The type of convergence is controlled by the internal parameter TOL. If the parameter is greater than 0, the singular values will have relative accuracy TOL. If TOL<0, the singular values will have absolute accuracy ABS(TOL)*norm(B). By default, |TOL| falls within the range of 10*Epsilon and 100*Epsilon, where Epsilon is the machine precision. It is not recommended to use TOL less than 10*Epsilon since this will considerably slow down the algorithm and may not lead to error decreasing. History: * 31 March, 2007. changed MAXITR from 6 to 12. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University October 31, 1999. *************************************************************************/
public static bool rmatrixbdsvd( ref double[] d, double[] e, int n, bool isupper, bool isfractionalaccuracyrequired, ref double[,] u, int nru, ref double[,] c, int ncc, ref double[,] vt, int ncvt)
besseli0
besseli1
besselj0
besselj1
besseljn
besselk0
besselk1
besselkn
bessely0
bessely1
besselyn
/************************************************************************* Modified Bessel function of order zero Returns modified Bessel function of order zero of the argument. The function is defined as i0(x) = j0( ix ). The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 30000 5.8e-16 1.4e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double besseli0(double x)
/************************************************************************* Modified Bessel function of order one Returns modified Bessel function of order one of the argument. The function is defined as i1(x) = -i j1( ix ). The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.9e-15 2.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double besseli1(double x)
/************************************************************************* Bessel function of order zero Returns Bessel function of order zero of the argument. The domain is divided into the intervals [0, 5] and (5, infinity). In the first interval the following rational approximation is used: (w - r1^2)*(w - r2^2)*P3(w)/Q8(w), where w = x^2 and the two r's are zeros of the function. In the second interval, the Hankel asymptotic expansion is employed with two rational functions of degree 6/6 and 7/7. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 60000 4.2e-16 1.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
public static double besselj0(double x)
/************************************************************************* Bessel function of order one Returns Bessel function of order one of the argument. The domain is divided into the intervals [0, 8] and (8, infinity). In the first interval a 24 term Chebyshev expansion is used. In the second, the asymptotic trigonometric representation is employed using two rational functions of degree 5/5. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 30000 2.6e-16 1.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
public static double besselj1(double x)
/************************************************************************* Bessel function of integer order Returns Bessel function of order n, where n is a (possibly negative) integer. The ratio of jn(x) to j0(x) is computed by backward recurrence. First the ratio jn/jn-1 is found by a continued fraction expansion. Then the recurrence relating successive orders is applied until j0 or j1 is reached. If n = 0 or 1 the routine for j0 or j1 is called directly. ACCURACY: Absolute error: arithmetic range # trials peak rms IEEE 0, 30 5000 4.4e-16 7.9e-17 Not suitable for large n or x. Use jv() (fractional order) instead. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double besseljn(int n, double x)
/************************************************************************* Modified Bessel function, second kind, order zero Returns modified Bessel function of the second kind of order zero of the argument. The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Tested at 2000 random points between 0 and 8. Peak absolute error (relative when K0 > 1) was 1.46e-14; rms, 4.26e-15. Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.2e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double besselk0(double x)
/************************************************************************* Modified Bessel function, second kind, order one Computes the modified Bessel function of the second kind of order one of the argument. The range is partitioned into the two intervals [0,2] and (2, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.2e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double besselk1(double x)
/************************************************************************* Modified Bessel function, second kind, integer order Returns modified Bessel function of the second kind of order n of the argument. The range is partitioned into the two intervals [0,9.55] and (9.55, infinity). An ascending power series is used in the low range, and an asymptotic expansion in the high range. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 90000 1.8e-8 3.0e-10 Error is high only near the crossover point x = 9.55 between the two expansions used. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 2000 by Stephen L. Moshier *************************************************************************/
public static double besselkn(int nn, double x)
/************************************************************************* Bessel function of the second kind, order zero Returns Bessel function of the second kind, of order zero, of the argument. The domain is divided into the intervals [0, 5] and (5, infinity). In the first interval a rational approximation R(x) is employed to compute y0(x) = R(x) + 2 * log(x) * j0(x) / PI. Thus a call to j0() is required. In the second interval, the Hankel asymptotic expansion is employed with two rational functions of degree 6/6 and 7/7. ACCURACY: Absolute error, when y0(x) < 1; else relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.3e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
public static double bessely0(double x)
/************************************************************************* Bessel function of second kind of order one Returns Bessel function of the second kind of order one of the argument. The domain is divided into the intervals [0, 8] and (8, infinity). In the first interval a 25 term Chebyshev expansion is used, and a call to j1() is required. In the second, the asymptotic trigonometric representation is employed using two rational functions of degree 5/5. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.0e-15 1.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
public static double bessely1(double x)
/************************************************************************* Bessel function of second kind of integer order Returns Bessel function of order n, where n is a (possibly negative) integer. The function is evaluated by forward recurrence on n, starting with values computed by the routines y0() and y1(). If n = 0 or 1 the routine for y0 or y1 is called directly. ACCURACY: Absolute error, except relative when y > 1: arithmetic domain # trials peak rms IEEE 0, 30 30000 3.4e-15 4.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double besselyn(int n, double x)
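For orientation, the sketch below evaluates one representative of each Bessel family at x = 2.5. The "alglib" class prefix is an assumption about packaging, and the commented values are approximate.

    double x = 2.5;
    double j0 = alglib.besselj0(x);     // ~ -0.048  (first kind, order 0)
    double y0 = alglib.bessely0(x);     // ~  0.498  (second kind, order 0)
    double i0 = alglib.besseli0(x);     // ~  3.290  (modified, first kind)
    double k0 = alglib.besselk0(x);     // ~  0.062  (modified, second kind)
    double j5 = alglib.besseljn(5, x);  // ~  0.020  (integer order, first kind)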
beta
/************************************************************************* Beta function beta(a, b) = Gamma(a) * Gamma(b) / Gamma(a+b). For large arguments the logarithm of the function is evaluated using lgam(), then exponentiated. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 30000 8.1e-14 1.1e-14 Cephes Math Library Release 2.0: April, 1987 Copyright 1984, 1987 by Stephen L. Moshier *************************************************************************/
public static double beta(double a, double b)
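As a quick sanity check (a sketch; the "alglib" class prefix is assumed), beta(2,3) = Gamma(2)*Gamma(3)/Gamma(5) = 1*2/24 = 1/12:

    double b = alglib.beta(2.0, 3.0);   // ~ 0.083333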
binomialcdistribution
binomialdistribution
invbinomialdistribution
/************************************************************************* Complemented binomial distribution Returns the sum of the terms k+1 through n of the binomial probability density: sum over j = k+1..n of C(n,j) * p^j * (1-p)^(n-j). The terms are not summed directly; instead the incomplete beta integral is employed, according to the formula y = bdtrc( k, n, p ) = incbet( k+1, n-k, p ). The arguments must be positive, with p ranging from 0 to 1. ACCURACY: Tested at random points (a,b,p). Relative error: arithmetic domain # trials peak rms For p between 0.001 and 1: IEEE 0,100 100000 6.7e-15 8.2e-16 For p between 0 and .001: IEEE 0,100 100000 1.5e-13 2.7e-15 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double binomialcdistribution(int k, int n, double p)
/************************************************************************* Binomial distribution Returns the sum of the terms 0 through k of the binomial probability density: sum over j = 0..k of C(n,j) * p^j * (1-p)^(n-j). The terms are not summed directly; instead the incomplete beta integral is employed, according to the formula y = bdtr( k, n, p ) = incbet( n-k, k+1, 1-p ). The arguments must be positive, with p ranging from 0 to 1. ACCURACY: Tested at random points (a,b,p), with p between 0 and 1. Relative error: arithmetic domain # trials peak rms For p between 0.001 and 1: IEEE 0,100 100000 4.3e-15 2.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double binomialdistribution(int k, int n, double p)
/************************************************************************* Inverse binomial distribution Finds the event probability p such that the sum of the terms 0 through k of the Binomial probability density is equal to the given cumulative probability y. This is accomplished using the inverse beta integral function and the relation 1 - p = incbi( n-k, k+1, y ). ACCURACY: Tested at random points (a,b,p). a,b Relative error: arithmetic domain # trials peak rms For p between 0.001 and 1: IEEE 0,100 100000 2.3e-14 6.4e-16 IEEE 0,10000 100000 6.6e-12 1.2e-13 For p between 10^-6 and 0.001: IEEE 0,100 100000 2.0e-12 1.3e-14 IEEE 0,10000 100000 1.5e-12 3.2e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double invbinomialdistribution(int k, int n, double y)
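A short sketch tying the three binomial routines together (the "alglib" class prefix is assumed). For X ~ Binomial(n=10, p=0.5), P(X<=3) = 176/1024, so:

    double cdf  = alglib.binomialdistribution(3, 10, 0.5);     // ~ 0.1719
    double tail = alglib.binomialcdistribution(3, 10, 0.5);    // ~ 0.8281, i.e. 1 - cdf
    double p    = alglib.invbinomialdistribution(3, 10, cdf);  // recovers p ~ 0.5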
chebyshevcalculate
chebyshevcoefficients
chebyshevsum
fromchebyshev
/************************************************************************* Calculation of the value of the Chebyshev polynomials of the first and second kinds. Parameters: r - polynomial kind, either 1 or 2. n - degree, n>=0 x - argument, -1 <= x <= 1 Result: the value of the Chebyshev polynomial at x *************************************************************************/
public static double chebyshevcalculate(int r, int n, double x)
/************************************************************************* Representation of Tn as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/
public static void chebyshevcoefficients(int n, out double[] c)
/************************************************************************* Summation of Chebyshev polynomials using Clenshaw's recurrence formula. This routine calculates c[0]*T0(x) + c[1]*T1(x) + ... + c[N]*TN(x) or c[0]*U0(x) + c[1]*U1(x) + ... + c[N]*UN(x) depending on R. Parameters: r - polynomial kind, either 1 or 2. n - degree, n>=0 x - argument Result: the value of the Chebyshev sum at x *************************************************************************/
public static double chebyshevsum(double[] c, int r, int n, double x)
/************************************************************************* Conversion of a series of Chebyshev polynomials to a power series. Represents A[0]*T0(x) + A[1]*T1(x) + ... + A[N]*Tn(x) as B[0] + B[1]*X + ... + B[N]*X^N. Input parameters: A - Chebyshev series coefficients N - degree, N>=0 Output parameters B - power series coefficients *************************************************************************/
public static void fromchebyshev(double[] a, int n, out double[] b)
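The sketch below uses T3(x) = 4*x^3 - 3*x as a worked example (the "alglib" class prefix is assumed): it evaluates T3 at x = 0.5, obtains its power-series coefficients, and evaluates the same polynomial as a one-term Chebyshev sum.

    double t3 = alglib.chebyshevcalculate(1, 3, 0.5);   // T3(0.5) = 4*0.125 - 1.5 = -1.0
    double[] c;
    alglib.chebyshevcoefficients(3, out c);             // c = { 0, -3, 0, 4 }
    double s = alglib.chebyshevsum(new double[] { 0, 0, 0, 1 }, 1, 3, 0.5);   // also -1.0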
chisquarecdistribution
chisquaredistribution
invchisquaredistribution
/************************************************************************* Complemented Chi-square distribution Returns the area under the right hand tail (from x to infinity) of the Chi square probability density function with v degrees of freedom: P( x | v ) = ( 1 / ( 2^(v/2) * Gamma(v/2) ) ) * integral from x to infinity of t^(v/2-1) * exp(-t/2) dt, where x is the Chi-square variable. The incomplete gamma integral is used, according to the formula y = chdtr( v, x ) = igamc( v/2.0, x/2.0 ). The arguments must both be positive. ACCURACY: See incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double chisquarecdistribution(double v, double x)
/************************************************************************* Chi-square distribution Returns the area under the left hand tail (from 0 to x) of the Chi square probability density function with v degrees of freedom: P( x | v ) = ( 1 / ( 2^(v/2) * Gamma(v/2) ) ) * integral from 0 to x of t^(v/2-1) * exp(-t/2) dt, where x is the Chi-square variable. The incomplete gamma integral is used, according to the formula y = chdtr( v, x ) = igam( v/2.0, x/2.0 ). The arguments must both be positive. ACCURACY: See incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double chisquaredistribution(double v, double x)
/************************************************************************* Inverse of complemented Chi-square distribution Finds the Chi-square argument x such that the integral from x to infinity of the Chi-square density is equal to the given cumulative probability y. This is accomplished using the inverse gamma integral function and the relation x/2 = igami( df/2, y ); ACCURACY: See inverse incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double invchisquaredistribution(double v, double y)
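A sketch for v = 4 degrees of freedom (the "alglib" class prefix is assumed); the left tail at x = 3 is 1 - exp(-1.5)*(1 + 1.5), roughly 0.4422:

    double left  = alglib.chisquaredistribution(4.0, 3.0);      // P(X <= 3) ~ 0.4422
    double right = alglib.chisquarecdistribution(4.0, 3.0);     // P(X > 3)  ~ 0.5578
    double x     = alglib.invchisquaredistribution(4.0, right); // recovers x ~ 3.0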
convc1d
convc1dcircular
convc1dcircularinv
convc1dinv
convr1d
convr1dcircular
convr1dcircularinv
convr1dinv
/************************************************************************* 1-dimensional complex convolution. For given A/B returns conv(A,B) (non-circular). Subroutine can automatically choose between three implementations: straightforward O(M*N) formula for very small N (or M), overlap-add algorithm for cases where max(M,N) is significantly larger than min(M,N) but the O(M*N) algorithm is too slow, and general FFT-based formula for cases where the two previous algorithms are too slow. Algorithm has max(M,N)*log(max(M,N)) complexity for any M/N. INPUT PARAMETERS A - array[0..M-1] - complex function to be transformed M - problem size B - array[0..N-1] - complex function to be transformed N - problem size OUTPUT PARAMETERS R - convolution: A*B. array[0..N+M-2]. NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void convc1d( complex[] a, int m, complex[] b, int n, out complex[] r)
/************************************************************************* 1-dimensional circular complex convolution. For given S/R returns conv(S,R) (circular). Algorithm has linearithmic complexity for any M/N. IMPORTANT: normal convolution is commutative, i.e. it is symmetric - conv(A,B)=conv(B,A). Cyclic convolution IS NOT. One function - S - is a signal, periodic function, and another - R - is a response, non-periodic function with limited length. INPUT PARAMETERS S - array[0..M-1] - complex periodic signal M - problem size B - array[0..N-1] - complex non-periodic response N - problem size OUTPUT PARAMETERS R - circular convolution: conv(S,B). array[0..M-1]. NOTE: It is assumed that B is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void convc1dcircular( complex[] s, int m, complex[] r, int n, out complex[] c)
/************************************************************************* 1-dimensional circular complex deconvolution (inverse of ConvC1DCircular()). Algorithm has M*log(M) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved periodic signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - non-periodic response N - response length OUTPUT PARAMETERS R - deconvolved signal. array[0..M-1]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has zero Fourier coefficient). NOTE: It is assumed that B is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void convc1dcircularinv( complex[] a, int m, complex[] b, int n, out complex[] r)
/************************************************************************* 1-dimensional complex non-circular deconvolution (inverse of ConvC1D()). Algorithm has M*log(M) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - response N - response length, N<=M OUTPUT PARAMETERS R - deconvolved signal. array[0..M-N]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has zero Fourier coefficient). NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void convc1dinv( complex[] a, int m, complex[] b, int n, out complex[] r)
/************************************************************************* 1-dimensional real convolution. Analogous to ConvC1D(), see ConvC1D() comments for more details. INPUT PARAMETERS A - array[0..M-1] - real function to be transformed M - problem size B - array[0..N-1] - real function to be transformed N - problem size OUTPUT PARAMETERS R - convolution: A*B. array[0..N+M-2]. NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void convr1d( double[] a, int m, double[] b, int n, out double[] r)
/************************************************************************* 1-dimensional circular real convolution. Analogous to ConvC1DCircular(), see ConvC1DCircular() comments for more details. INPUT PARAMETERS S - array[0..M-1] - real signal M - problem size B - array[0..N-1] - real response N - problem size OUTPUT PARAMETERS R - circular convolution: conv(S,B). array[0..M-1]. NOTE: It is assumed that B is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void convr1dcircular( double[] s, int m, double[] r, int n, out double[] c)
/************************************************************************* 1-dimensional circular real deconvolution (inverse of ConvR1DCircular()). Algorithm has M*log(M) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved periodic signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - response N - response length OUTPUT PARAMETERS R - deconvolved signal. array[0..M-1]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has zero Fourier coefficient). NOTE: It is assumed that B is zero at T<0. If it has non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void convr1dcircularinv( double[] a, int m, double[] b, int n, out double[] r)
/************************************************************************* 1-dimensional real deconvolution (inverse of ConvR1D()). Algorithm has M*log(M) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - response N - response length, N<=M OUTPUT PARAMETERS R - deconvolved signal. array[0..M-N]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has zero Fourier coefficient). NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void convr1dinv( double[] a, int m, double[] b, int n, out double[] r)
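The sketch below convolves a short real signal with a two-tap response and then undoes the operation with the matching deconvolution routine (the "alglib" class prefix is assumed; remember that deconvolution can fail for responses with zero Fourier coefficients).

    double[] a = { 1, 2, 3, 4 };            // M = 4
    double[] b = { 1, 0.5 };                // N = 2, no zero Fourier coefficients
    double[] r, a2;
    alglib.convr1d(a, 4, b, 2, out r);      // r = { 1, 2.5, 4, 5.5, 2 }, length M+N-1
    alglib.convr1dinv(r, 5, b, 2, out a2);  // a2 reproduces { 1, 2, 3, 4 }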
corrc1d
corrc1dcircular
corrr1d
corrr1dcircular
/************************************************************************* 1-dimensional complex cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (non-circular). Correlation is calculated using reduction to convolution. Algorithm with max(M,N)*log(max(M,N)) complexity is used (see ConvC1D() for more info about performance). IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrC1D(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - complex function to be transformed, signal containing pattern N - problem size Pattern - array[0..M-1] - complex function to be transformed, pattern to search within signal M - problem size OUTPUT PARAMETERS R - cross-correlation, array[0..N+M-2]: * positive lags are stored in R[0..N-1], R[i] = sum(conj(pattern[j])*signal[i+j]) * negative lags are stored in R[N..N+M-2], R[N+M-1-i] = sum(conj(pattern[j])*signal[-i+j]) NOTE: It is assumed that pattern domain is [0..M-1]. If Pattern is non-zero on [-K..M-1], you can still use this subroutine, just shift result by K. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void corrc1d( complex[] signal, int n, complex[] pattern, int m, out complex[] r)
/************************************************************************* 1-dimensional circular complex cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (circular). Algorithm has linearithmic complexity for any M/N. IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrC1DCircular(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - complex function to be transformed, periodic signal containing pattern N - problem size Pattern - array[0..M-1] - complex function to be transformed, non-periodic pattern to search within signal M - problem size OUTPUT PARAMETERS R - circular cross-correlation, array[0..M-1]. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void corrc1dcircular( complex[] signal, int m, complex[] pattern, int n, out complex[] c)
/************************************************************************* 1-dimensional real cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (non-circular). Correlation is calculated using reduction to convolution. Algorithm with max(M,N)*log(max(M,N)) complexity is used (see ConvC1D() for more info about performance). IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrR1D(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - real function to be transformed, signal containing pattern N - problem size Pattern - array[0..M-1] - real function to be transformed, pattern to search within signal M - problem size OUTPUT PARAMETERS R - cross-correlation, array[0..N+M-2]: * positive lags are stored in R[0..N-1], R[i] = sum(pattern[j]*signal[i+j]) * negative lags are stored in R[N..N+M-2], R[N+M-1-i] = sum(pattern[j]*signal[-i+j]) NOTE: It is assumed that pattern domain is [0..M-1]. If Pattern is non-zero on [-K..M-1], you can still use this subroutine, just shift result by K. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void corrr1d( double[] signal, int n, double[] pattern, int m, out double[] r)
/************************************************************************* 1-dimensional circular real cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (circular). Algorithm has linearithmic complexity for any M/N. IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrR1DCircular(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - real function to be transformed, periodic signal containing pattern N - problem size Pattern - array[0..M-1] - real function to be transformed, non-periodic pattern to search within signal M - problem size OUTPUT PARAMETERS R - circular cross-correlation, array[0..M-1]. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
public static void corrr1dcircular( double[] signal, int m, double[] pattern, int n, out double[] c)
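A sketch of pattern matching with the real cross-correlation routine (the "alglib" class prefix is assumed); note the documented argument order, signal first, then pattern.

    double[] signal  = { 0, 0, 1, 2, 1, 0, 0, 0 };   // N = 8
    double[] pattern = { 1, 2, 1 };                   // M = 3
    double[] r;
    alglib.corrr1d(signal, 8, pattern, 3, out r);
    // r[0..7] holds positive lags; r[2] = 6 is the maximum, so the best
    // alignment of the pattern starts at index 2 of the signal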
pearsoncorrelationsignificance
spearmanrankcorrelationsignificance
/************************************************************************* Pearson's correlation coefficient significance test This test checks hypotheses about whether X and Y are samples of two continuous distributions having zero correlation or whether their correlation is non-zero. The following tests are performed: * two-tailed test (null hypothesis - X and Y have zero correlation) * left-tailed test (null hypothesis - the correlation coefficient is greater than or equal to 0) * right-tailed test (null hypothesis - the correlation coefficient is less than or equal to 0). Requirements: * the number of elements in each sample is not less than 5 * normality of distributions of X and Y. Input parameters: R - Pearson's correlation coefficient for X and Y N - number of elements in samples, N>=5. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
public static void pearsoncorrelationsignificance( double r, int n, out double bothtails, out double lefttail, out double righttail)
/************************************************************************* Spearman's rank correlation coefficient significance test This test checks hypotheses about whether X and Y are samples of two continuous distributions having zero correlation or whether their correlation is non-zero. The following tests are performed: * two-tailed test (null hypothesis - X and Y have zero correlation) * left-tailed test (null hypothesis - the correlation coefficient is greater than or equal to 0) * right-tailed test (null hypothesis - the correlation coefficient is less than or equal to 0). Requirements: * the number of elements in each sample is not less than 5. The test is non-parametric and doesn't require distributions X and Y to be normal. Input parameters: R - Spearman's rank correlation coefficient for X and Y N - number of elements in samples, N>=5. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
public static void spearmanrankcorrelationsignificance( double r, int n, out double bothtails, out double lefttail, out double righttail)
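A sketch of a significance test (the "alglib" class prefix is assumed): given a sample Pearson correlation of 0.6 computed from 20 paired observations, reject the null hypothesis of zero correlation when the two-tailed p-value falls below the chosen significance level.

    double bothTails, leftTail, rightTail;
    alglib.pearsoncorrelationsignificance(0.6, 20, out bothTails, out leftTail, out rightTail);
    bool correlated = bothTails < 0.05;   // two-tailed test at the 5% level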
dawsonintegral
/************************************************************************* Dawson's Integral Approximates the integral dawsn(x) = exp(-x^2) * integral from 0 to x of exp(t^2) dt. Three different rational approximations are employed, for the intervals 0 to 3.25; 3.25 to 6.25; and 6.25 up. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,10 10000 6.9e-16 1.0e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
public static double dawsonintegral(double x)
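For reference, dawsn(1) is approximately 0.5381 (a sketch; the "alglib" class prefix is assumed):

    double dw = alglib.dawsonintegral(1.0);   // ~ 0.5381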
densesolverlsreport
densesolverreport
cmatrixlusolve
cmatrixlusolvem
cmatrixmixedsolve
cmatrixmixedsolvem
cmatrixsolve
cmatrixsolvem
hpdmatrixcholeskysolve
hpdmatrixcholeskysolvem
hpdmatrixsolve
hpdmatrixsolvem
rmatrixlusolve
rmatrixlusolvem
rmatrixmixedsolve
rmatrixmixedsolvem
rmatrixsolve
rmatrixsolvels
rmatrixsolvem
spdmatrixcholeskysolve
spdmatrixcholeskysolvem
spdmatrixsolve
spdmatrixsolvem
/************************************************************************* *************************************************************************/
public class densesolverlsreport { public double r2; public double[,] cx; public int n; public int k; }
/************************************************************************* *************************************************************************/
public class densesolverreport { public double r1; public double rinf; }
/************************************************************************* Dense solver. Same as RMatrixLUSolve(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * O(N^2) complexity * condition number estimation No iterative refinement is provided because exact form of original matrix is not known to subroutine. Use CMatrixSolve or CMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void cmatrixlusolve( complex[,] lua, int[] p, int n, complex[] b, out int info, out densesolverreport rep, out complex[] x)
/************************************************************************* Dense solver. Same as RMatrixLUSolveM(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * O(M*N^2) complexity * condition number estimation No iterative refinement is provided because exact form of original matrix is not known to subroutine. Use CMatrixSolve or CMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void cmatrixlusolvem( complex[,] lua, int[] p, int n, complex[,] b, int m, out int info, out densesolverreport rep, out complex[,] x)
/************************************************************************* Dense solver. Same as RMatrixMixedSolve(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolveM Rep - same as in RMatrixSolveM X - same as in RMatrixSolveM -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void cmatrixmixedsolve( complex[,] a, complex[,] lua, int[] p, int n, complex[] b, out int info, out densesolverreport rep, out complex[] x)
/************************************************************************* Dense solver. Same as RMatrixMixedSolveM(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(M*N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix LUA - array[0..N-1,0..N-1], LU decomposition, CMatrixLU result P - array[0..N-1], pivots array, CMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolveM Rep - same as in RMatrixSolveM X - same as in RMatrixSolveM -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void cmatrixmixedsolvem( complex[,] a, complex[,] lua, int[] p, int n, complex[,] b, int m, out int info, out densesolverreport rep, out complex[,] x)
/************************************************************************* Dense solver. Same as RMatrixSolve(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^3) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void cmatrixsolve( complex[,] a, int n, complex[] b, out int info, out densesolverreport rep, out complex[] x)
/************************************************************************* Dense solver. Same as RMatrixSolveM(), but for complex matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^3+M*N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1,0..M-1], right part M - right part size RFS - iterative refinement switch: * True - refinement is used. Less performance, more precision. * False - refinement is not used. More performance, less precision. OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void cmatrixsolvem( complex[,] a, int n, complex[,] b, int m, bool rfs, out int info, out densesolverreport rep, out complex[,] x)
/************************************************************************* Dense solver. Same as RMatrixLUSolve(), but for HPD matrices represented by their Cholesky decomposition. Algorithm features: * automatic detection of degenerate cases * O(N^2) complexity * condition number estimation * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS CHA - array[0..N-1,0..N-1], Cholesky decomposition, SPDMatrixCholesky result N - size of A IsUpper - what half of CHA is provided B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void hpdmatrixcholeskysolve( complex[,] cha, int n, bool isupper, complex[] b, out int info, out densesolverreport rep, out complex[] x)
/************************************************************************* Dense solver. Same as RMatrixLUSolveM(), but for HPD matrices represented by their Cholesky decomposition. Algorithm features: * automatic detection of degenerate cases * O(M*N^2) complexity * condition number estimation * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS CHA - array[0..N-1,0..N-1], Cholesky decomposition, HPDMatrixCholesky result N - size of CHA IsUpper - what half of CHA is provided B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void hpdmatrixcholeskysolvem( complex[,] cha, int n, bool isupper, complex[,] b, int m, out int info, out densesolverreport rep, out complex[,] x)
/************************************************************************* Dense solver. Same as RMatrixSolve(), but for Hermitian positive definite matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * O(N^3) complexity * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A IsUpper - what half of A is provided B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Returns -3 for non-HPD matrices. Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void hpdmatrixsolve( complex[,] a, int n, bool isupper, complex[] b, out int info, out densesolverreport rep, out complex[] x)
/************************************************************************* Dense solver. Same as RMatrixSolveM(), but for Hermitian positive definite matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * O(N^3+M*N^2) complexity * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A IsUpper - what half of A is provided B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve. Returns -3 for non-HPD matrices. Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void hpdmatrixsolvem( complex[,] a, int n, bool isupper, complex[,] b, int m, out int info, out densesolverreport rep, out complex[,] x)
/************************************************************************* Dense solver. This subroutine solves a system A*X=B, where A is NxN non-degenerate real matrix given by its LU decomposition, X and B are NxM real matrices. Algorithm features: * automatic detection of degenerate cases * O(N^2) complexity * condition number estimation No iterative refinement is provided because exact form of original matrix is not known to subroutine. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void rmatrixlusolve( double[,] lua, int[] p, int n, double[] b, out int info, out densesolverreport rep, out double[] x)
/************************************************************************* Dense solver. Similar to RMatrixLUSolve() but solves task with multiple right parts (where b and x are NxM matrices). Algorithm features: * automatic detection of degenerate cases * O(M*N^2) complexity * condition number estimation No iterative refinement is provided because exact form of original matrix is not known to subroutine. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void rmatrixlusolvem( double[,] lua, int[] p, int n, double[,] b, int m, out int info, out densesolverreport rep, out double[,] x)
/************************************************************************* Dense solver. This subroutine solves a system A*x=b, where BOTH ORIGINAL A AND ITS LU DECOMPOSITION ARE KNOWN. You can use it if for some reason you have both A and its LU decomposition. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolveM Rep - same as in RMatrixSolveM X - same as in RMatrixSolveM -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void rmatrixmixedsolve( double[,] a, double[,] lua, int[] p, int n, double[] b, out int info, out densesolverreport rep, out double[] x)
/************************************************************************* Dense solver. Similar to RMatrixMixedSolve() but solves task with multiple right parts (where b and x are NxM matrices). Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(M*N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix LUA - array[0..N-1,0..N-1], LU decomposition, RMatrixLU result P - array[0..N-1], pivots array, RMatrixLU result N - size of A B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolveM Rep - same as in RMatrixSolveM X - same as in RMatrixSolveM -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void rmatrixmixedsolvem( double[,] a, double[,] lua, int[] p, int n, double[,] b, int m, out int info, out densesolverreport rep, out double[,] x)
/************************************************************************* Dense solver. This subroutine solves a system A*x=b, where A is NxN non-degenerate real matrix, x and b are vectors. Algorithm features: * automatic detection of degenerate cases * condition number estimation * iterative refinement * O(N^3) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1], right part OUTPUT PARAMETERS Info - return code: * -3 A is singular, or VERY close to singular. X is filled by zeros in such cases. * -1 N<=0 was passed * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). Rep - solver report, see below for more info X - array[0..N-1], it contains: * solution of A*x=b if A is non-singular (well-conditioned or ill-conditioned, but not very close to singular) * zeros, if A is singular or VERY close to singular (in this case Info=-3). SOLVER REPORT Subroutine sets following fields of the Rep structure: * R1 reciprocal of condition number: 1/cond(A), 1-norm. * RInf reciprocal of condition number: 1/cond(A), inf-norm. -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void rmatrixsolve( double[,] a, int n, double[] b, out int info, out densesolverreport rep, out double[] x)
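A minimal 2x2 sketch (the "alglib" class prefix is assumed; the report type is the densesolverreport class documented above):

    double[,] a = { { 4, 1 }, { 1, 3 } };
    double[] b = { 1, 2 };
    int info;
    alglib.densesolverreport rep;
    double[] x;
    alglib.rmatrixsolve(a, 2, b, out info, out rep, out x);
    // info = 1 on success; x ~ { 1.0/11, 7.0/11 };
    // rep.r1 and rep.rinf hold the reciprocal condition numbers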
/************************************************************************* Dense solver. This subroutine finds solution of the linear system A*X=B with non-square, possibly degenerate A. System is solved in the least squares sense, and general least squares solution X = X0 + CX*y which minimizes |A*X-B| is returned. If A is non-degenerate, solution in the usual sense is returned. Algorithm features: * automatic detection of degenerate cases * iterative refinement * O(N^3) complexity INPUT PARAMETERS A - array[0..NRows-1,0..NCols-1], system matrix NRows - vertical size of A NCols - horizontal size of A B - array[0..NRows-1], right part Threshold- a number in [0,1]. Singular values beyond Threshold are considered zero. Set it to 0.0, if you don't understand what it means, so the solver will choose a good value on its own. OUTPUT PARAMETERS Info - return code: * -4 SVD subroutine failed * -1 if NRows<=0 or NCols<=0 or Threshold<0 was passed * 1 if task is solved Rep - solver report, see below for more info X - array[0..N-1,0..M-1], it contains: * solution of A*X=B if A is non-singular (well-conditioned or ill-conditioned, but not very close to singular) * zeros, if A is singular or VERY close to singular (in this case Info=-3). SOLVER REPORT Subroutine sets following fields of the Rep structure: * R2 reciprocal of condition number: 1/cond(A), 2-norm. * N = NCols * K dim(Null(A)) * CX array[0..N-1,0..K-1], kernel of A. Columns of CX store such vectors that A*CX[i]=0. -- ALGLIB -- Copyright 24.08.2009 by Bochkanov Sergey *************************************************************************/
public static void rmatrixsolvels( double[,] a, int nrows, int ncols, double[] b, double threshold, out int info, out densesolverlsreport rep, out double[] x)
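A least-squares sketch (the "alglib" class prefix is assumed): fitting the overdetermined 3x2 system below in the least squares sense gives x close to { 8, -3 }, the intercept and slope of the best line through (1,6), (2,0), (3,0).

    double[,] a = { { 1, 1 }, { 1, 2 }, { 1, 3 } };
    double[] b = { 6, 0, 0 };
    int info;
    alglib.densesolverlsreport rep;
    double[] x;
    alglib.rmatrixsolvels(a, 3, 2, b, 0.0, out info, out rep, out x);
    // Threshold = 0.0 lets the solver pick its own cutoff; x ~ { 8, -3 }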
/************************************************************************* Dense solver. Similar to RMatrixSolve() but solves task with multiple right parts (where b and x are NxM matrices). Algorithm features: * automatic detection of degenerate cases * condition number estimation * optional iterative refinement * O(N^3+M*N^2) complexity INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A B - array[0..N-1,0..M-1], right part M - right part size RFS - iterative refinement switch: * True - refinement is used. Less performance, more precision. * False - refinement is not used. More performance, less precision. OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void rmatrixsolvem( double[,] a, int n, double[,] b, int m, bool rfs, out int info, out densesolverreport rep, out double[,] x)
/************************************************************************* Dense solver. Same as RMatrixLUSolve(), but for SPD matrices represented by their Cholesky decomposition. Algorithm features: * automatic detection of degenerate cases * O(N^2) complexity * condition number estimation * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS CHA - array[0..N-1,0..N-1], Cholesky decomposition, SPDMatrixCholesky result N - size of A IsUpper - what half of CHA is provided B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void spdmatrixcholeskysolve( double[,] cha, int n, bool isupper, double[] b, out int info, out densesolverreport rep, out double[] x)
/************************************************************************* Dense solver. Same as RMatrixLUSolveM(), but for SPD matrices represented by their Cholesky decomposition. Algorithm features: * automatic detection of degenerate cases * O(M*N^2) complexity * condition number estimation * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS CHA - array[0..N-1,0..N-1], Cholesky decomposition, SPDMatrixCholesky result N - size of CHA IsUpper - what half of CHA is provided B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void spdmatrixcholeskysolvem( double[,] cha, int n, bool isupper, double[,] b, int m, out int info, out densesolverreport rep, out double[,] x)
/************************************************************************* Dense solver. Same as RMatrixSolve(), but for SPD matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * O(N^3) complexity * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A IsUpper - what half of A is provided B - array[0..N-1], right part OUTPUT PARAMETERS Info - same as in RMatrixSolve Returns -3 for non-SPD matrices. Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void spdmatrixsolve( double[,] a, int n, bool isupper, double[] b, out int info, out densesolverreport rep, out double[] x)
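A sketch showing the upper-triangle convention for the SPD solvers (the "alglib" class prefix is assumed): only the upper triangle of A needs to be filled in when IsUpper is True.

    double[,] a = { { 2, 1 },
                    { 0, 2 } };   // lower triangle ignored; full matrix is [[2,1],[1,2]]
    double[] b = { 3, 3 };
    int info;
    alglib.densesolverreport rep;
    double[] x;
    alglib.spdmatrixsolve(a, 2, true, b, out info, out rep, out x);
    // x = { 1, 1 }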
/************************************************************************* Dense solver. Same as RMatrixSolveM(), but for symmetric positive definite matrices. Algorithm features: * automatic detection of degenerate cases * condition number estimation * O(N^3+M*N^2) complexity * matrix is represented by its upper or lower triangle No iterative refinement is provided because such partial representation of matrix does not allow efficient calculation of extra-precise matrix-vector products for large matrices. Use RMatrixSolve or RMatrixMixedSolve if you need iterative refinement. INPUT PARAMETERS A - array[0..N-1,0..N-1], system matrix N - size of A IsUpper - what half of A is provided B - array[0..N-1,0..M-1], right part M - right part size OUTPUT PARAMETERS Info - same as in RMatrixSolve. Returns -3 for non-SPD matrices. Rep - same as in RMatrixSolve X - same as in RMatrixSolve -- ALGLIB -- Copyright 27.01.2010 by Bochkanov Sergey *************************************************************************/
public static void spdmatrixsolvem( double[,] a, int n, bool isupper, double[,] b, int m, out int info, out densesolverreport rep, out double[,] x)
decisionforest
dfreport
dfavgce
dfavgerror
dfavgrelerror
dfbuildrandomdecisionforest
dfbuildrandomdecisionforestx1
dfprocess
dfprocessi
dfrelclserror
dfrmserror
dfserialize
dfunserialize
/************************************************************************* *************************************************************************/
public class decisionforest { }
/************************************************************************* *************************************************************************/
public class dfreport
{
    public double relclserror;
    public double avgce;
    public double rmserror;
    public double avgerror;
    public double avgrelerror;
    public double oobrelclserror;
    public double oobavgce;
    public double oobrmserror;
    public double oobavgerror;
    public double oobavgrelerror;
}
/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*LN(2)). Zero if model solves regression task. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/
public static double dfavgce(decisionforest df, double[,] xy, int npoints)
/************************************************************************* Average error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average error when estimating posterior probabilities. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/
public static double dfavgerror( decisionforest df, double[,] xy, int npoints)
/************************************************************************* Average relative error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average relative error when estimating posterior probability of belonging to the correct class. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/
public static double dfavgrelerror( decisionforest df, double[,] xy, int npoints)
/************************************************************************* This subroutine builds random decision forest. INPUT PARAMETERS: XY - training set NPoints - training set size, NPoints>=1 NVars - number of independent variables, NVars>=1 NClasses - task type: * NClasses=1 - regression task with one dependent variable * NClasses>1 - classification task with NClasses classes. NTrees - number of trees in a forest, NTrees>=1. recommended values: 50-100. R - fraction of the training set used to build individual trees. 0<R<=1. recommended values: 0.1 <= R <= 0.66. OUTPUT PARAMETERS: Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<1, NVars<1, NClasses<1, NTrees<1, R<=0 or R>1). * 1, if task has been solved DF - model built Rep - training report, contains error on a training set and out-of-bag estimates of generalization error. -- ALGLIB -- Copyright 19.02.2009 by Bochkanov Sergey *************************************************************************/
public static void dfbuildrandomdecisionforest( double[,] xy, int npoints, int nvars, int nclasses, int ntrees, double r, out int info, out decisionforest df, out dfreport rep)
/************************************************************************* This subroutine builds random decision forest. This function gives ability to tune number of variables used when choosing best split. INPUT PARAMETERS: XY - training set NPoints - training set size, NPoints>=1 NVars - number of independent variables, NVars>=1 NClasses - task type: * NClasses=1 - regression task with one dependent variable * NClasses>1 - classification task with NClasses classes. NTrees - number of trees in a forest, NTrees>=1. recommended values: 50-100. NRndVars - number of variables used when choosing best split R - fraction of the training set used to build individual trees. 0<R<=1. recommended values: 0.1 <= R <= 0.66. OUTPUT PARAMETERS: Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<1, NVars<1, NClasses<1, NTrees<1, R<=0 or R>1). * 1, if task has been solved DF - model built Rep - training report, contains error on a training set and out-of-bag estimates of generalization error. -- ALGLIB -- Copyright 19.02.2009 by Bochkanov Sergey *************************************************************************/
public static void dfbuildrandomdecisionforestx1( double[,] xy, int npoints, int nvars, int nclasses, int ntrees, int nrndvars, double r, out int info, out decisionforest df, out dfreport rep)
/************************************************************************* Processing INPUT PARAMETERS: DF - decision forest model X - input vector, array[0..NVars-1]. OUTPUT PARAMETERS: Y - result. Regression estimate when solving regression task, vector of posterior probabilities for classification task. See also DFProcessI. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/
public static void dfprocess( decisionforest df, double[] x, ref double[] y)
/************************************************************************* 'Interactive' variant of DFProcess for languages like Python which support constructs like "Y = DFProcessI(DF,X)" and the interactive mode of the interpreter. This function allocates a new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from the command line. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static void dfprocessi( decisionforest df, double[] x, out double[] y)
/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. Zero if model solves regression task. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/
public static double dfrelclserror( decisionforest df, double[,] xy, int npoints)
/************************************************************************* RMS error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: root mean square error. Its meaning for regression task is obvious. As for classification task, RMS error means error when estimating posterior probabilities. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/
public static double dfrmserror( decisionforest df, double[,] xy, int npoints)
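
A minimal usage sketch (for illustration only; the toy dataset, variable names and parameter values below are not part of the ALGLIB example set): it builds a forest for a one-dimensional regression problem y=x and queries it with DFProcess.

public static int Main(string[] args)
{
    //
    // toy training set: each row is {x, y}; NVars=1, NClasses=1 => regression
    //
    double[,] xy = new double[,]{
        {0.0,0.0},{0.1,0.1},{0.2,0.2},{0.3,0.3},{0.4,0.4},
        {0.6,0.6},{0.7,0.7},{0.8,0.8},{0.9,0.9},{1.0,1.0}};
    int info;
    alglib.decisionforest df;
    alglib.dfreport rep;

    // 50 trees, each grown on 66% of the training set
    alglib.dfbuildrandomdecisionforest(xy, 10, 1, 1, 50, 0.66, out info, out df, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1

    // regression estimate at x=0.5; should be roughly 0.5 (the forest is randomized)
    double[] y = new double[0];
    alglib.dfprocess(df, new double[]{0.5}, ref y);
    System.Console.WriteLine("{0}", alglib.ap.format(y,2));

    // RMS error on the training set; small, but not necessarily zero
    System.Console.WriteLine("{0}", alglib.dfrmserror(df, xy, 10));
    return 0;
}
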
/************************************************************************* This function serializes a data structure to a string. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although the serializer uses spaces and CR+LF as separators, you can replace any separator character by an arbitrary combination of spaces, tabs, Windows or Unix newlines. This allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words", nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize a structure on a 32-bit machine and unserialize it on a 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in the C++ version of ALGLIB and unserialize it in the C# one, and vice versa. *************************************************************************/
public static void dfserialize(decisionforest obj, out string s_out)
/************************************************************************* This function unserializes data structure from string. *************************************************************************/
public static void dfunserialize(string s_in, out decisionforest obj)
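
A short round-trip sketch of the serializer (illustrative only, assuming a previously built decisionforest df, e.g. from the sketch above):

    // model -> portable string -> identical model
    string s;
    alglib.decisionforest df2;
    alglib.dfserialize(df, out s);
    alglib.dfunserialize(s, out df2);
    // df2 now returns the same predictions as df
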
ellipticintegrale
ellipticintegralk
ellipticintegralkhighprecision
incompleteellipticintegrale
incompleteellipticintegralk
/************************************************************************* Complete elliptic integral of the second kind Approximates the integral pi/2 - | | 2 E(m) = | sqrt( 1 - m sin t ) dt | | - 0 using the approximation P(x) - x log x Q(x). ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 1 10000 2.1e-16 7.3e-17 Cephes Math Library, Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
public static double ellipticintegrale(double m)
/************************************************************************* Complete elliptic integral of the first kind Approximates the integral pi/2 - | | | dt K(m) = | ------------------ | 2 | | sqrt( 1 - m sin t ) - 0 using the approximation P(x) - log x Q(x). ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,1 30000 2.5e-16 6.8e-17 Cephes Math Library, Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double ellipticintegralk(double m)
/************************************************************************* Complete elliptic integral of the first kind Approximates the integral pi/2 - | | | dt K(m) = | ------------------ | 2 | | sqrt( 1 - m sin t ) - 0 where m = 1 - m1, using the approximation P(x) - log x Q(x). The argument m1 is used rather than m so that the logarithmic singularity at m = 1 will be shifted to the origin; this preserves maximum accuracy. K(0) = pi/2. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,1 30000 2.5e-16 6.8e-17 Algorithm taken from the Cephes Math Library *************************************************************************/
public static double ellipticintegralkhighprecision(double m1)
/************************************************************************* Incomplete elliptic integral of the second kind Approximates the integral phi - | | | 2 E(phi_\m) = | sqrt( 1 - m sin t ) dt | | | - 0 of amplitude phi and modulus m, using the arithmetic - geometric mean algorithm. ACCURACY: Tested at random arguments with phi in [-10, 10] and m in [0, 1]. Relative error: arithmetic domain # trials peak rms IEEE -10,10 150000 3.3e-15 1.4e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1993, 2000 by Stephen L. Moshier *************************************************************************/
public static double incompleteellipticintegrale(double phi, double m)
/************************************************************************* Incomplete elliptic integral of the first kind F(phi|m) Approximates the integral phi - | | | dt F(phi_\m) = | ------------------ | 2 | | sqrt( 1 - m sin t ) - 0 of amplitude phi and modulus m, using the arithmetic - geometric mean algorithm. ACCURACY: Tested at random points with m in [0, 1] and phi as indicated. Relative error: arithmetic domain # trials peak rms IEEE -10,10 200000 7.4e-16 1.0e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double incompleteellipticintegralk(double phi, double m)
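
A small consistency sketch (illustrative only, not an official ALGLIB example): K(0)=E(0)=pi/2, and at phi=pi/2 the incomplete integrals reduce to the complete ones.

public static int Main(string[] args)
{
    // complete integrals at m=0: both equal pi/2 = 1.5707963...
    System.Console.WriteLine("{0}", alglib.ellipticintegralk(0));
    System.Console.WriteLine("{0}", alglib.ellipticintegrale(0));

    // F(pi/2|m)=K(m) and E(pi/2|m)=E(m); both differences should be ~0
    double m = 0.5;
    System.Console.WriteLine("{0}", alglib.incompleteellipticintegralk(System.Math.PI/2, m) - alglib.ellipticintegralk(m));
    System.Console.WriteLine("{0}", alglib.incompleteellipticintegrale(System.Math.PI/2, m) - alglib.ellipticintegrale(m));
    return 0;
}
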
hmatrixevd
hmatrixevdi
hmatrixevdr
rmatrixevd
smatrixevd
smatrixevdi
smatrixevdr
smatrixtdevd
smatrixtdevdi
smatrixtdevdr
/************************************************************************* Finding the eigenvalues and eigenvectors of a Hermitian matrix The algorithm finds eigen pairs of a Hermitian matrix by reducing it to real tridiagonal form and using the QL/QR algorithm. Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains the eigenvectors. Array whose indexes range within [0..N-1, 0..N-1]. The eigenvectors are stored in the matrix columns. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). Note: eigenvectors of Hermitian matrix are defined up to multiplication by a complex number L, such that |L|=1. -- ALGLIB -- Copyright 2005, 23 March 2007 by Bochkanov Sergey *************************************************************************/
public static bool hmatrixevd( complex[,] a, int n, int zneeded, bool isupper, out double[] d, out complex[,] z)
/************************************************************************* Subroutine for finding the eigenvalues and eigenvectors of a Hermitian matrix with given indexes by using bisection and inverse iteration methods Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Output parameters: W - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..I2-I1]. In that case, the eigenvectors are stored in the matrix columns. Result: True, if successful. W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. Note: eigen vectors of Hermitian matrix are defined up to multiplication by a complex number L, such as |L|=1. -- ALGLIB -- Copyright 07.01.2006, 24.03.2007 by Bochkanov Sergey. *************************************************************************/
public static bool hmatrixevdi( complex[,] a, int n, int zneeded, bool isupper, int i1, int i2, out double[] w, out complex[,] z)
/************************************************************************* Subroutine for finding the eigenvalues (and eigenvectors) of a Hermitian matrix in a given half-interval (A, B] by using a bisection and inverse iteration Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. B1, B2 - half-interval (B1, B2] to search eigenvalues in. Output parameters: M - number of eigenvalues found in a given half-interval, M>=0 W - array of the eigenvalues found. Array whose index ranges within [0..M-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..M-1]. The eigenvectors are stored in the matrix columns. Result: True, if successful. M contains the number of eigenvalues in the given half-interval (could be equal to 0), W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. Note: eigen vectors of Hermitian matrix are defined up to multiplication by a complex number L, such as |L|=1. -- ALGLIB -- Copyright 07.01.2006, 24.03.2007 by Bochkanov Sergey. *************************************************************************/
public static bool hmatrixevdr( complex[,] a, int n, int zneeded, bool isupper, double b1, double b2, out int m, out double[] w, out complex[,] z)
/************************************************************************* Finding eigenvalues and eigenvectors of a general matrix The algorithm finds eigenvalues and eigenvectors of a general matrix by using the QR algorithm with multiple shifts. The algorithm can find eigenvalues and both left and right eigenvectors. The right eigenvector is a vector x such that A*x = w*x, and the left eigenvector is a vector y such that y'*A = w*y' (here y' implies a complex conjugate transposition of vector y). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. VNeeded - flag controlling whether eigenvectors are needed or not. If VNeeded is equal to: * 0, eigenvectors are not returned; * 1, right eigenvectors are returned; * 2, left eigenvectors are returned; * 3, both left and right eigenvectors are returned. Output parameters: WR - real parts of eigenvalues. Array whose index ranges within [0..N-1]. WI - imaginary parts of eigenvalues. Array whose index ranges within [0..N-1]. VL, VR - arrays of left and right eigenvectors (if they are needed). If WI[i]=0, the respective eigenvalue is a real number, and it corresponds to the column number I of matrices VL/VR. If WI[i]>0, we have a pair of complex conjugate numbers with positive and negative imaginary parts: the first eigenvalue WR[i] + sqrt(-1)*WI[i]; the second eigenvalue WR[i+1] + sqrt(-1)*WI[i+1]; WI[i]>0 WI[i+1] = -WI[i] < 0 In that case, the eigenvector corresponding to the first eigenvalue is located in i and i+1 columns of matrices VL/VR (the column number i contains the real part, and the column number i+1 contains the imaginary part), and the vector corresponding to the second eigenvalue is a complex conjugate to the first vector. Arrays whose indexes range within [0..N-1, 0..N-1]. Result: True, if the algorithm has converged. False, if the algorithm has not converged. Note 1: Some users may ask the following question: what if WI[N-1]>0? WI[N] must contain an eigenvalue which is complex conjugate to the N-th eigenvalue, but the array has only size N? The answer is as follows: such a situation cannot occur because the algorithm finds pairs of eigenvalues, therefore, if WI[i]>0, I is strictly less than N-1. Note 2: The algorithm performance depends on the value of the internal parameter NS of the InternalSchurDecomposition subroutine which defines the number of shifts in the QR algorithm (similarly to the block width in block-matrix algorithms of linear algebra). If you require maximum performance on your machine, it is recommended to adjust this parameter manually. See also the InternalTREVC subroutine. The algorithm is based on the LAPACK 3.0 library. *************************************************************************/
public static bool rmatrixevd( double[,] a, int n, int vneeded, out double[] wr, out double[] wi, out double[,] vl, out double[,] vr)
/************************************************************************* Finding the eigenvalues and eigenvectors of a symmetric matrix The algorithm finds eigen pairs of a symmetric matrix by reducing it to tridiagonal form and using the QL/QR algorithm. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpper - storage format. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains the eigenvectors. Array whose indexes range within [0..N-1, 0..N-1]. The eigenvectors are stored in the matrix columns. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). -- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/
public static bool smatrixevd( double[,] a, int n, int zneeded, bool isupper, out double[] d, out double[,] z)
/************************************************************************* Subroutine for finding the eigenvalues and eigenvectors of a symmetric matrix with given indexes by using bisection and inverse iteration methods. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Output parameters: W - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..I2-I1]. In that case, the eigenvectors are stored in the matrix columns. Result: True, if successful. W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. -- ALGLIB -- Copyright 07.01.2006 by Bochkanov Sergey *************************************************************************/
public static bool smatrixevdi( double[,] a, int n, int zneeded, bool isupper, int i1, int i2, out double[] w, out double[,] z)
/************************************************************************* Subroutine for finding the eigenvalues (and eigenvectors) of a symmetric matrix in a given half open interval (A, B] by using a bisection and inverse iteration Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. B1, B2 - half open interval (B1, B2] to search eigenvalues in. Output parameters: M - number of eigenvalues found in a given half-interval (M>=0). W - array of the eigenvalues found. Array whose index ranges within [0..M-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..M-1]. The eigenvectors are stored in the matrix columns. Result: True, if successful. M contains the number of eigenvalues in the given half-interval (could be equal to 0), W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. -- ALGLIB -- Copyright 07.01.2006 by Bochkanov Sergey *************************************************************************/
public static bool smatrixevdr( double[,] a, int n, int zneeded, bool isupper, double b1, double b2, out int m, out double[] w, out double[,] z)
/************************************************************************* Finding the eigenvalues and eigenvectors of a tridiagonal symmetric matrix The algorithm finds the eigen pairs of a tridiagonal symmetric matrix by using an QL/QR algorithm with implicit shifts. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix; * 2, the eigenvectors of a tridiagonal matrix replace the square matrix Z; * 3, matrix Z contains the first row of the eigenvectors matrix. Z - if ZNeeded=1, Z contains the square matrix by which the eigenvectors are multiplied. Array whose indexes range within [0..N-1, 0..N-1]. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains the product of a given matrix (from the left) and the eigenvectors matrix (from the right); * 2, Z contains the eigenvectors. * 3, Z contains the first row of the eigenvectors matrix. If ZNeeded<3, Z is the array whose indexes range within [0..N-1, 0..N-1]. In that case, the eigenvectors are stored in the matrix columns. If ZNeeded=3, Z is the array whose indexes range within [0..0, 0..N-1]. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994 *************************************************************************/
public static bool smatrixtdevd( ref double[] d, double[] e, int n, int zneeded, ref double[,] z)
/************************************************************************* Subroutine for finding tridiagonal matrix eigenvalues/vectors with given indexes (in ascending order) by using the bisection and inverse iteration. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of matrix. N>=0. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix. * 2, the eigenvectors of a tridiagonal matrix replace matrix Z. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Z - if ZNeeded is equal to: * 0, Z isn't used and remains unchanged; * 1, Z contains the square matrix (array whose indexes range within [0..N-1, 0..N-1]) which reduces the given symmetric matrix to tridiagonal form; * 2, Z isn't used (but changed on the exit). Output parameters: D - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, doesn't contain any information; * 1, contains the product of a given NxN matrix Z (from the left) and Nx(I2-I1) matrix of the eigenvectors found (from the right). Array whose indexes range within [0..N-1, 0..I2-I1]. * 2, contains the matrix of the eigenvectors found. Array whose indexes range within [0..N-1, 0..I2-I1]. Result: True, if successful. In that case, D contains the eigenvalues, Z contains the eigenvectors (if needed). It should be noted that the subroutine changes the size of arrays D and Z. False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. -- ALGLIB -- Copyright 25.12.2005 by Bochkanov Sergey *************************************************************************/
public static bool smatrixtdevdi( ref double[] d, double[] e, int n, int zneeded, int i1, int i2, ref double[,] z)
/************************************************************************* Subroutine for finding the tridiagonal matrix eigenvalues/vectors in a given half-interval (A, B] by using bisection and inverse iteration. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of matrix, N>=0. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix. * 2, the eigenvectors of a tridiagonal matrix replace matrix Z. A, B - half-interval (A, B] to search eigenvalues in. Z - if ZNeeded is equal to: * 0, Z isn't used and remains unchanged; * 1, Z contains the square matrix (array whose indexes range within [0..N-1, 0..N-1]) which reduces the given symmetric matrix to tridiagonal form; * 2, Z isn't used (but changed on the exit). Output parameters: D - array of the eigenvalues found. Array whose index ranges within [0..M-1]. M - number of eigenvalues found in the given half-interval (M>=0). Z - if ZNeeded is equal to: * 0, doesn't contain any information; * 1, contains the product of a given NxN matrix Z (from the left) and NxM matrix of the eigenvectors found (from the right). Array whose indexes range within [0..N-1, 0..M-1]. * 2, contains the matrix of the eigenvectors found. Array whose indexes range within [0..N-1, 0..M-1]. Result: True, if successful. In that case, M contains the number of eigenvalues in the given half-interval (could be equal to 0), D contains the eigenvalues, Z contains the eigenvectors (if needed). It should be noted that the subroutine changes the size of arrays D and Z. False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. -- ALGLIB -- Copyright 31.03.2008 by Bochkanov Sergey *************************************************************************/
public static bool smatrixtdevdr( ref double[] d, double[] e, int n, int zneeded, double a, double b, out int m, ref double[,] z)
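
A small illustration (not an official example): the symmetric 2x2 matrix [[2,1],[1,2]] has eigenvalues 1 and 3, which SMatrixEVD returns in ascending order.

public static int Main(string[] args)
{
    double[,] a = new double[,]{{2,1},{1,2}};
    double[] d;
    double[,] z;

    // ZNeeded=1: compute eigenvectors too; IsUpper=true: upper triangle is used
    bool ok = alglib.smatrixevd(a, 2, 1, true, out d, out z);
    System.Console.WriteLine("{0}", ok);                    // EXPECTED: True
    System.Console.WriteLine("{0}", alglib.ap.format(d,3)); // eigenvalues 1 and 3, ascending
    return 0;
}
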
exponentialintegralei
exponentialintegralen
/************************************************************************* Exponential integral Ei(x) x - t | | e Ei(x) = -|- --- dt . | | t - -inf Not defined for x <= 0. See also expn.c. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,100 50000 8.6e-16 1.3e-16 Cephes Math Library Release 2.8: May, 1999 Copyright 1999 by Stephen L. Moshier *************************************************************************/
public static double exponentialintegralei(double x)
/************************************************************************* Exponential integral En(x) Evaluates the exponential integral inf. - | | -xt | e E (x) = | ---- dt. n | n | | t - 1 Both n and x must be nonnegative. The routine employs either a power series, a continued fraction, or an asymptotic formula depending on the relative values of n and x. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 30 10000 1.7e-15 3.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 2000 by Stephen L. Moshier *************************************************************************/
public static double exponentialintegralen(double x, int n)
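
A quick numeric check (illustrative only, not an official example): Ei(1) is approximately 1.8951 and E1(1) is approximately 0.2194.

public static int Main(string[] args)
{
    System.Console.WriteLine("{0}", alglib.exponentialintegralei(1.0));    // ~1.8951
    System.Console.WriteLine("{0}", alglib.exponentialintegralen(1.0, 1)); // ~0.2194
    return 0;
}
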
fcdistribution
fdistribution
invfdistribution
/************************************************************************* Complemented F distribution Returns the area from x to infinity under the F density function (also known as Snedcor's density or the variance ratio density). inf. - 1 | | a-1 b-1 1-P(x) = ------ | t (1-t) dt B(a,b) | | - x The incomplete beta integral is used, according to the formula P(x) = incbet( df2/2, df1/2, (df2/(df2 + df1*x) ). ACCURACY: Tested at random points (a,b,x) in the indicated intervals. x a,b Relative error: arithmetic domain domain # trials peak rms IEEE 0,1 1,100 100000 3.7e-14 5.9e-16 IEEE 1,5 1,100 100000 8.0e-15 1.6e-15 IEEE 0,1 1,10000 100000 1.8e-11 3.5e-13 IEEE 1,5 1,10000 100000 2.0e-11 3.0e-12 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double fcdistribution(int a, int b, double x)
/************************************************************************* F distribution Returns the area from zero to x under the F density function (also known as Snedcor's density or the variance ratio density). This is the density of x = (u1/df1)/(u2/df2), where u1 and u2 are random variables having Chi square distributions with df1 and df2 degrees of freedom, respectively. The incomplete beta integral is used, according to the formula P(x) = incbet( df1/2, df2/2, (df1*x/(df2 + df1*x) ). The arguments a and b are greater than zero, and x is nonnegative. ACCURACY: Tested at random points (a,b,x). x a,b Relative error: arithmetic domain domain # trials peak rms IEEE 0,1 0,100 100000 9.8e-15 1.7e-15 IEEE 1,5 0,100 100000 6.5e-15 3.5e-16 IEEE 0,1 1,10000 100000 2.2e-11 3.3e-12 IEEE 1,5 1,10000 100000 1.1e-11 1.7e-13 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double fdistribution(int a, int b, double x)
/************************************************************************* Inverse of complemented F distribution Finds the F density argument x such that the integral from x to infinity of the F density is equal to the given probability p. This is accomplished using the inverse beta integral function and the relations z = incbi( df2/2, df1/2, p ) x = df2 (1-z) / (df1 z). Note: the following relations hold for the inverse of the uncomplemented F distribution: z = incbi( df1/2, df2/2, p ) x = df2 z / (df1 (1-z)). ACCURACY: Tested at random points (a,b,p). a,b Relative error: arithmetic domain # trials peak rms For p between .001 and 1: IEEE 1,100 100000 8.3e-15 4.7e-16 IEEE 1,10000 100000 2.1e-11 1.4e-13 For p between 10^-6 and 10^-3: IEEE 1,100 50000 1.3e-12 8.4e-15 IEEE 1,10000 50000 3.0e-12 4.8e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double invfdistribution(int a, int b, double y)
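
A small consistency sketch (not an official example): invfdistribution() inverts the complemented F distribution, so applying it to the output of fcdistribution() recovers the original argument.

public static int Main(string[] args)
{
    int a = 5, b = 10;   // degrees of freedom
    double x = 2.5;
    double p  = alglib.fcdistribution(a, b, x);    // tail probability P(F>x)
    double x2 = alglib.invfdistribution(a, b, p);  // should recover x
    System.Console.WriteLine("{0}", x2 - x);       // EXPECTED: ~0
    return 0;
}
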
fftc1d
fftc1dinv
fftr1d
fftr1dinv
fft_complex_d1 Complex FFT: simple example
fft_complex_d2 Complex FFT: advanced example
fft_real_d1 Real FFT: simple example
fft_real_d2 Real FFT: advanced example
/************************************************************************* 1-dimensional complex FFT. Array size N may be an arbitrary number (composite or prime). Composite N's are handled with a cache-oblivious variation of the Cooley-Tukey algorithm. Small prime factors are transformed using hard-coded codelets (similar to FFTW codelets, but without low-level optimization), large prime factors are handled with Bluestein's algorithm. The fastest transforms are for smooth N's (prime factors are 2, 3, 5 only), with powers of 2 being fastest of all. When N has prime factors larger than these, but orders of magnitude smaller than N, computations will be about 4 times slower than for nearby highly composite N's. When N itself is prime, speed will be 6 times lower. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - complex function to be transformed N - problem size OUTPUT PARAMETERS A - DFT of an input array, array[0..N-1] A_out[j] = SUM(A_in[k]*exp(-2*pi*sqrt(-1)*j*k/N), k = 0..N-1) -- ALGLIB -- Copyright 29.05.2009 by Bochkanov Sergey *************************************************************************/
public static void fftc1d(ref complex[] a)
public static void fftc1d(ref complex[] a, int n)

Examples: fft_complex_d1, fft_complex_d2

/************************************************************************* 1-dimensional complex inverse FFT. Array size N may be an arbitrary number (composite or prime). Algorithm has O(N*logN) complexity for any N (composite or prime). See FFTC1D() description for more information about algorithm performance. INPUT PARAMETERS A - array[0..N-1] - complex array to be transformed N - problem size OUTPUT PARAMETERS A - inverse DFT of an input array, array[0..N-1] A_out[j] = SUM(A_in[k]/N*exp(+2*pi*sqrt(-1)*j*k/N), k = 0..N-1) -- ALGLIB -- Copyright 29.05.2009 by Bochkanov Sergey *************************************************************************/
public static void fftc1dinv(ref complex[] a)
public static void fftc1dinv(ref complex[] a, int n)

Examples: fft_complex_d1, fft_complex_d2

/************************************************************************* 1-dimensional real FFT. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - real function to be transformed N - problem size OUTPUT PARAMETERS F - DFT of an input array, array[0..N-1] F[j] = SUM(A[k]*exp(-2*pi*sqrt(-1)*j*k/N), k = 0..N-1) NOTE: F[] satisfies the symmetry property F[k] = conj(F[N-k]), so just one half of the array is usually needed. But for convenience the subroutine returns the full complex array (with frequencies above N/2), so its result may be used by other FFT-related subroutines. -- ALGLIB -- Copyright 01.06.2009 by Bochkanov Sergey *************************************************************************/
public static void fftr1d(double[] a, out complex[] f)
public static void fftr1d(double[] a, int n, out complex[] f)

Examples: fft_real_d1, fft_real_d2

/************************************************************************* 1-dimensional real inverse FFT. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS F - array[0..floor(N/2)] - frequencies from forward real FFT N - problem size OUTPUT PARAMETERS A - inverse DFT of an input array, array[0..N-1] NOTE: F[] should satisfy the symmetry property F[k] = conj(F[N-k]), so just one half of the frequencies array is needed - elements from 0 to floor(N/2). F[0] is ALWAYS real. If N is even F[floor(N/2)] is real too. If N is odd, then F[floor(N/2)] has no special properties. Relying on the properties noted above, the FFTR1DInv subroutine uses only elements from the 0th to the floor(N/2)-th. It ignores the imaginary part of F[0], and in case N is even it ignores the imaginary part of F[floor(N/2)] too. When you call this function using the full argument list - "FFTR1DInv(F,N,A)" - you can pass either the frequencies array with N elements or the reduced array with roughly N/2 elements - the subroutine will successfully transform both. If you call this function using the reduced argument list - "FFTR1DInv(F,A)" - you must pass the FULL array with N elements (although the elements above N/2 are still not used) because the array size is used to automatically determine the FFT length -- ALGLIB -- Copyright 01.06.2009 by Bochkanov Sergey *************************************************************************/
public static void fftr1dinv(complex[] f, out double[] a)
public static void fftr1dinv(complex[] f, int n, out double[] a)

Examples: fft_real_d1, fft_real_d2


public static int Main(string[] args)
{
    //
    // first we demonstrate forward FFT:
    // [1i,1i,1i,1i] is converted to [4i, 0, 0, 0]
    //
    alglib.complex[] z = new alglib.complex[]{new alglib.complex(0,1),new alglib.complex(0,1),new alglib.complex(0,1),new alglib.complex(0,1)};
    alglib.fftc1d(ref z);
    System.Console.WriteLine("{0}", alglib.ap.format(z,3)); // EXPECTED: [4i,0,0,0]

    //
    // now we convert [4i, 0, 0, 0] back to [1i,1i,1i,1i]
    // with backward FFT
    //
    alglib.fftc1dinv(ref z);
    System.Console.WriteLine("{0}", alglib.ap.format(z,3)); // EXPECTED: [1i,1i,1i,1i]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // first we demonstrate forward FFT:
    // [0,1,0,1i] is converted to [1+1i, -1-1i, -1-1i, 1+1i]
    //
    alglib.complex[] z = new alglib.complex[]{0,1,0,new alglib.complex(0,1)};
    alglib.fftc1d(ref z);
    System.Console.WriteLine("{0}", alglib.ap.format(z,3)); // EXPECTED: [1+1i, -1-1i, -1-1i, 1+1i]

    //
    // now we convert result back with backward FFT
    //
    alglib.fftc1dinv(ref z);
    System.Console.WriteLine("{0}", alglib.ap.format(z,3)); // EXPECTED: [0,1,0,1i]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // first we demonstrate forward FFT:
    // [1,1,1,1] is converted to [4, 0, 0, 0]
    //
    double[] x = new double[]{1,1,1,1};
    alglib.complex[] f;
    double[] x2;
    alglib.fftr1d(x, out f);
    System.Console.WriteLine("{0}", alglib.ap.format(f,3)); // EXPECTED: [4,0,0,0]

    //
    // now we convert [4, 0, 0, 0] back to [1,1,1,1]
    // with backward FFT
    //
    alglib.fftr1dinv(f, out x2);
    System.Console.WriteLine("{0}", alglib.ap.format(x2,3)); // EXPECTED: [1,1,1,1]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // first we demonstrate forward FFT:
    // [1,2,3,4] is converted to [10, -2+2i, -2, -2-2i]
    //
    // note that output array is self-adjoint:
    // * f[0] = conj(f[0])
    // * f[1] = conj(f[3])
    // * f[2] = conj(f[2])
    //
    double[] x = new double[]{1,2,3,4};
    alglib.complex[] f;
    double[] x2;
    alglib.fftr1d(x, out f);
    System.Console.WriteLine("{0}", alglib.ap.format(f,3)); // EXPECTED: [10, -2+2i, -2, -2-2i]

    //
    // now we convert [10, -2+2i, -2, -2-2i] back to [1,2,3,4]
    //
    alglib.fftr1dinv(f, out x2);
    System.Console.WriteLine("{0}", alglib.ap.format(x2,3)); // EXPECTED: [1,2,3,4]

    //
    // remember that F is self-adjoint? It means that we can pass just half
    // (slightly larger than half) of F to inverse real FFT and still get our result.
    //
    // I.e. instead [10, -2+2i, -2, -2-2i] we pass just [10, -2+2i, -2] and everything works!
    //
    // NOTE: in this case we should explicitly pass array length (which is 4) to ALGLIB;
    // if not, it will automatically use array length to determine FFT size and
    // will erroneously make half-length FFT.
    //
    f = new alglib.complex[]{10,new alglib.complex(-2,+2),-2};
    alglib.fftr1dinv(f, 4, out x2);
    System.Console.WriteLine("{0}", alglib.ap.format(x2,3)); // EXPECTED: [1,2,3,4]
    System.Console.ReadLine();
    return 0;
}


fhtr1d
fhtr1dinv
/************************************************************************* 1-dimensional Fast Hartley Transform. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - real function to be transformed N - problem size OUTPUT PARAMETERS A - FHT of an input array, array[0..N-1], A_out[k] = sum(A_in[j]*(cos(2*pi*j*k/N)+sin(2*pi*j*k/N)), j=0..N-1) -- ALGLIB -- Copyright 04.06.2009 by Bochkanov Sergey *************************************************************************/
public static void fhtr1d(ref double[] a, int n)
/************************************************************************* 1-dimensional inverse FHT. Algorithm has O(N*logN) complexity for any N (composite or prime). INPUT PARAMETERS A - array[0..N-1] - real array to be transformed N - problem size OUTPUT PARAMETERS A - inverse FHT of an input array, array[0..N-1] -- ALGLIB -- Copyright 29.05.2009 by Bochkanov Sergey *************************************************************************/
public static void fhtr1dinv(ref double[] a, int n)
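
A minimal round-trip sketch (illustrative only, not an official example): a forward FHT followed by the inverse FHT recovers the original sequence.

public static int Main(string[] args)
{
    double[] a = new double[]{1,2,3,4};
    alglib.fhtr1d(ref a, 4);     // forward Hartley transform
    alglib.fhtr1dinv(ref a, 4);  // inverse transform
    System.Console.WriteLine("{0}", alglib.ap.format(a,3)); // EXPECTED: [1,2,3,4]
    return 0;
}
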
fresnelintegral
/************************************************************************* Fresnel integral Evaluates the Fresnel integrals x - | | C(x) = | cos(pi/2 t**2) dt, | | - 0 x - | | S(x) = | sin(pi/2 t**2) dt. | | - 0 The integrals are evaluated by a power series for x < 1. For x >= 1 auxiliary functions f(x) and g(x) are employed such that C(x) = 0.5 + f(x) sin( pi/2 x**2 ) - g(x) cos( pi/2 x**2 ) S(x) = 0.5 - f(x) cos( pi/2 x**2 ) - g(x) sin( pi/2 x**2 ) ACCURACY: Relative error. Arithmetic function domain # trials peak rms IEEE S(x) 0, 10 10000 2.0e-15 3.2e-16 IEEE C(x) 0, 10 10000 1.8e-15 3.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
public static void fresnelintegral(double x, ref double c, ref double s)
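
A short illustration (not an official example): C(0)=S(0)=0, and both integrals tend to 0.5 as x grows.

public static int Main(string[] args)
{
    double c = 0, s = 0;
    alglib.fresnelintegral(0.0, ref c, ref s);
    System.Console.WriteLine("{0} {1}", c, s);   // EXPECTED: 0 0
    alglib.fresnelintegral(100.0, ref c, ref s);
    System.Console.WriteLine("{0} {1}", c, s);   // both values close to 0.5
    return 0;
}
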
gammafunction
lngamma
/************************************************************************* Gamma function Input parameters: X - argument Domain: 0 < X < 171.6 -170 < X < 0, X is not an integer. Relative error: arithmetic domain # trials peak rms IEEE -170,-33 20000 2.3e-15 3.3e-16 IEEE -33, 33 20000 9.4e-16 2.2e-16 IEEE 33, 171.6 20000 2.3e-15 3.2e-16 Cephes Math Library Release 2.8: June, 2000 Original copyright 1984, 1987, 1989, 1992, 2000 by Stephen L. Moshier Translated to AlgoPascal by Bochkanov Sergey (2005, 2006, 2007). *************************************************************************/
public static double gammafunction(double x)
/************************************************************************* Natural logarithm of gamma function Input parameters: X - argument Result: logarithm of the absolute value of the Gamma(X). Output parameters: SgnGam - sign(Gamma(X)) Domain: 0 < X < 2.55e305 -2.55e305 < X < 0, X is not an integer. ACCURACY: arithmetic domain # trials peak rms IEEE 0, 3 28000 5.4e-16 1.1e-16 IEEE 2.718, 2.556e305 40000 3.5e-16 8.3e-17 The error criterion was relative when the function magnitude was greater than one but absolute when it was less than one. The following test used the relative error criterion, though at certain points the relative error could be much higher than indicated. IEEE -200, -4 10000 4.8e-16 1.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 1992, 2000 by Stephen L. Moshier Translated to AlgoPascal by Bochkanov Sergey (2005, 2006, 2007). *************************************************************************/
public static double lngamma(double x, out double sgngam)
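
A quick numeric check (illustrative only, not an official example): Gamma(5)=4!=24, and lngamma() returns ln|Gamma(x)| together with the sign of Gamma(x).

public static int Main(string[] args)
{
    double sgngam;
    System.Console.WriteLine("{0}", alglib.gammafunction(5.0));       // EXPECTED: 24
    System.Console.WriteLine("{0}", alglib.lngamma(5.0, out sgngam)); // ln(24) = 3.178...
    System.Console.WriteLine("{0}", sgngam);                          // EXPECTED: 1
    return 0;
}
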
gkqgenerategaussjacobi
gkqgenerategausslegendre
gkqgeneraterec
gkqlegendrecalc
gkqlegendretbl
/************************************************************************* Returns Gauss and Gauss-Kronrod nodes/weights for Gauss-Jacobi quadrature on [-1,1] with weight function W(x)=Power(1-x,Alpha)*Power(1+x,Beta). INPUT PARAMETERS: N - number of Kronrod nodes, must be odd number, >=3. Alpha - power-law coefficient, Alpha>-1 Beta - power-law coefficient, Beta>-1 OUTPUT PARAMETERS: Info - error code: * -5 no real and positive Gauss-Kronrod formula can be created for such a weight function with a given number of nodes. * -4 an error was detected when calculating weights/nodes. Alpha or Beta are too close to -1 to obtain weights/nodes with high enough accuracy, or, may be, N is too large. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK * +2 OK, but quadrature rule have exterior nodes, x[0]<-1 or x[n-1]>+1 X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
public static void gkqgenerategaussjacobi( int n, double alpha, double beta, out int info, out double[] x, out double[] wkronrod, out double[] wgauss)
/************************************************************************* Returns Gauss and Gauss-Kronrod nodes/weights for Gauss-Legendre quadrature with N points. GKQLegendreCalc (calculation) or GKQLegendreTbl (precomputed table) is used depending on machine precision and number of nodes. INPUT PARAMETERS: N - number of Kronrod nodes, must be odd number, >=3. OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
public static void gkqgenerategausslegendre( int n, out int info, out double[] x, out double[] wkronrod, out double[] wgauss)
/************************************************************************* Computation of nodes and weights of a Gauss-Kronrod quadrature formula The algorithm generates the N-point Gauss-Kronrod quadrature formula with weight function given by coefficients alpha and beta of a recurrence relation which generates a system of orthogonal polynomials: P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zero moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha – alpha coefficients, array[0..floor(3*K/2)]. Beta – beta coefficients, array[0..ceil(3*K/2)]. Beta[0] is not used and may be arbitrary. Beta[I]>0. Mu0 – zeroth moment of the weight function. N – number of nodes of the Gauss-Kronrod quadrature formula, N >= 3, N = 2*K+1. OUTPUT PARAMETERS: Info - error code: * -5 no real and positive Gauss-Kronrod formula can be created for such a weight function with a given number of nodes. * -4 N is too large, task may be ill conditioned - x[i]=x[i+1] found. * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 08.05.2009 by Bochkanov Sergey *************************************************************************/
public static void gkqgeneraterec( double[] alpha, double[] beta, double mu0, int n, out int info, out double[] x, out double[] wkronrod, out double[] wgauss)
/************************************************************************* Returns Gauss and Gauss-Kronrod nodes for quadrature with N points. Reduction to tridiagonal eigenproblem is used. INPUT PARAMETERS: N - number of Kronrod nodes, must be odd number, >=3. OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
public static void gkqlegendrecalc( int n, out int info, out double[] x, out double[] wkronrod, out double[] wgauss)
/************************************************************************* Returns Gauss and Gauss-Kronrod nodes for quadrature with N points using pre-calculated table. Nodes/weights were computed with accuracy up to 1.0E-32 (if MPFR version of ALGLIB is used). In standard double precision accuracy reduces to something about 2.0E-16 (depending on your compiler's handling of long floating point constants). INPUT PARAMETERS: N - number of Kronrod nodes. N can be 15, 21, 31, 41, 51, 61. OUTPUT PARAMETERS: X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
public static void gkqlegendretbl( int n, out double[] x, out double[] wkronrod, out double[] wgauss, out double eps)
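
An illustrative sketch (not an official example): the 15-point Gauss-Kronrod rule on [-1,1] and its embedded 7-point Gauss rule both integrate f(x)=x^2 exactly (exact value 2/3); in practice the difference between the two estimates is used as an error indicator.

public static int Main(string[] args)
{
    int info;
    double[] x, wkronrod, wgauss;
    alglib.gkqgenerategausslegendre(15, out info, out x, out wkronrod, out wgauss);

    double ik = 0, ig = 0;
    for(int i=0; i<15; i++)
    {
        ik += wkronrod[i]*x[i]*x[i]; // Kronrod estimate
        ig += wgauss[i]*x[i]*x[i];   // Gauss estimate (zero weights at Kronrod-only nodes)
    }
    System.Console.WriteLine("{0} {1}", ik, ig); // both ~0.6667
    return 0;
}
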
gqgenerategausshermite
gqgenerategaussjacobi
gqgenerategausslaguerre
gqgenerategausslegendre
gqgenerategausslobattorec
gqgenerategaussradaurec
gqgeneraterec
/************************************************************************* Returns nodes/weights for Gauss-Hermite quadrature on (-inf,+inf) with weight function W(x)=Exp(-x*x) INPUT PARAMETERS: N - number of nodes, >=1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. May be, N is too large. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N/Alpha was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
public static void gqgenerategausshermite( int n, out int info, out double[] x, out double[] w)
/************************************************************************* Returns nodes/weights for Gauss-Jacobi quadrature on [-1,1] with weight function W(x)=Power(1-x,Alpha)*Power(1+x,Beta). INPUT PARAMETERS: N - number of nodes, >=1 Alpha - power-law coefficient, Alpha>-1 Beta - power-law coefficient, Beta>-1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. Alpha or Beta are too close to -1 to obtain weights/nodes with high enough accuracy, or, may be, N is too large. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N/Alpha/Beta was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
public static void gqgenerategaussjacobi( int n, double alpha, double beta, out int info, out double[] x, out double[] w)
/************************************************************************* Returns nodes/weights for Gauss-Laguerre quadrature on [0,+inf) with weight function W(x)=Power(x,Alpha)*Exp(-x) INPUT PARAMETERS: N - number of nodes, >=1 Alpha - power-law coefficient, Alpha>-1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. Alpha is too close to -1 to obtain weights/nodes with high enough accuracy or, may be, N is too large. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N/Alpha was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
public static void gqgenerategausslaguerre( int n, double alpha, out int info, out double[] x, out double[] w)
/************************************************************************* Returns nodes/weights for Gauss-Legendre quadrature on [-1,1] with N nodes. INPUT PARAMETERS: N - number of nodes, >=1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
public static void gqgenerategausslegendre( int n, out int info, out double[] x, out double[] w)
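
A minimal quadrature sketch (illustrative only, not an official example): a 5-point Gauss-Legendre rule is exact for polynomials up to degree 9, so it reproduces the integral of x^4 over [-1,1] (exact value 0.4).

public static int Main(string[] args)
{
    int info;
    double[] x, w;
    alglib.gqgenerategausslegendre(5, out info, out x, out w);

    double v = 0;
    for(int i=0; i<5; i++)
        v += w[i]*System.Math.Pow(x[i], 4);
    System.Console.WriteLine("{0}", v); // EXPECTED: 0.4
    return 0;
}
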
/************************************************************************* Computation of nodes and weights for a Gauss-Lobatto quadrature formula The algorithm generates the N-point Gauss-Lobatto quadrature formula with weight function given by coefficients alpha and beta of a recurrence which generates a system of orthogonal polynomials. P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha – array[0..N-2], alpha coefficients Beta – array[0..N-2], beta coefficients. Zero-indexed element is not used, may be arbitrary. Beta[I]>0 Mu0 – zeroth moment of the weighting function. A – left boundary of the integration interval. B – right boundary of the integration interval. N – number of nodes of the quadrature formula, N>=3 (including the left and right boundary nodes). OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/
public static void gqgenerategausslobattorec( double[] alpha, double[] beta, double mu0, double a, double b, int n, out int info, out double[] x, out double[] w)
/************************************************************************* Computation of nodes and weights for a Gauss-Radau quadrature formula The algorithm generates the N-point Gauss-Radau quadrature formula with weight function given by the coefficients alpha and beta of a recurrence which generates a system of orthogonal polynomials. P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha – array[0..N-2], alpha coefficients. Beta – array[0..N-1], beta coefficients Zero-indexed element is not used. Beta[I]>0 Mu0 – zeroth moment of the weighting function. A – left boundary of the integration interval. N – number of nodes of the quadrature formula, N>=2 (including the left boundary node). OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/
public static void gqgenerategaussradaurec( double[] alpha, double[] beta, double mu0, double a, int n, out int info, out double[] x, out double[] w)
/************************************************************************* Computation of nodes and weights for a Gauss quadrature formula The algorithm generates the N-point Gauss quadrature formula with weight function given by coefficients alpha and beta of a recurrence relation which generates a system of orthogonal polynomials: P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha – array[0..N-1], alpha coefficients Beta – array[0..N-1], beta coefficients Zero-indexed element is not used and may be arbitrary. Beta[I]>0. Mu0 – zeroth moment of the weight function. N – number of nodes of the quadrature formula, N>=1 OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/
public static void gqgeneraterec( double[] alpha, double[] beta, double mu0, int n, out int info, out double[] x, out double[] w)
hermitecalculate
hermitecoefficients
hermitesum
/************************************************************************* Calculation of the value of the Hermite polynomial. Parameters: n - degree, n>=0 x - argument Result: the value of the Hermite polynomial Hn at x *************************************************************************/
public static double hermitecalculate(int n, double x)
/************************************************************************* Representation of Hn as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/
public static void hermitecoefficients(int n, out double[] c)
/************************************************************************* Summation of Hermite polynomials using Clenshaw’s recurrence formula. This routine calculates c[0]*H0(x) + c[1]*H1(x) + ... + c[N]*HN(x) Parameters: n - degree, n>=0 x - argument Result: the value of the sum at x *************************************************************************/
public static double hermitesum(double[] c, int n, double x)
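A small consistency sketch (assuming the usual alglib class prefix, which is not stated in this section): HermiteSum with coefficient vector {0,0,0,1} should reproduce HermiteCalculate for degree 3, since both evaluate H3(x) = 8*x^3 - 12*x.

    // Hypothetical sketch: direct evaluation vs. Clenshaw summation of H3(x)
    double x = 0.7;
    double direct = alglib.hermitecalculate(3, x);                     // H3(0.7)
    double viasum = alglib.hermitesum(new double[]{0, 0, 0, 1}, 3, x); // same H3(0.7)
    // both values should equal 8*x^3 - 12*x up to rounding error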
hqrndstate
hqrndexponential
hqrndnormal
hqrndnormal2
hqrndrandomize
hqrndseed
hqrnduniformi
hqrnduniformr
hqrndunit2
/************************************************************************* Portable high quality random number generator state. Initialized with HQRNDRandomize() or HQRNDSeed(). Fields: S1, S2 - seed values V - precomputed value MagicV - 'magic' value used to determine whether State structure was correctly initialized. *************************************************************************/
public class hqrndstate { }
/************************************************************************* Random number generator: exponential distribution State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 11.08.2007 by Bochkanov Sergey *************************************************************************/
public static double hqrndexponential(hqrndstate state, double lambdav)
/************************************************************************* Random number generator: normal numbers This function generates one random number from normal distribution. Its performance is equal to that of HQRNDNormal2() State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
public static double hqrndnormal(hqrndstate state)
/************************************************************************* Random number generator: normal numbers This function generates two independent random numbers from normal distribution. Its performance is equal to that of HQRNDNormal() State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
public static void hqrndnormal2( hqrndstate state, out double x1, out double x2)
/************************************************************************* HQRNDState initialization with random values which come from standard RNG. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
public static void hqrndrandomize(out hqrndstate state)
/************************************************************************* HQRNDState initialization with seed values -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
public static void hqrndseed(int s1, int s2, out hqrndstate state)
/************************************************************************* This function generates a random integer number in [0, N) 1. N must be less than HQRNDMax-1. 2. State structure must be initialized with HQRNDRandomize() or HQRNDSeed() -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
public static int hqrnduniformi(hqrndstate state, int n)
/************************************************************************* This function generates random real number in (0,1), not including interval boundaries State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
public static double hqrnduniformr(hqrndstate state)
/************************************************************************* Random number generator: random X and Y such that X^2+Y^2=1 State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
public static void hqrndunit2( hqrndstate state, out double x, out double y)
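Typical use of the HQRND functions is sketched below (illustrative only; the alglib class prefix is an assumption, not taken from this section): the state object is initialized once and then passed to the generators.

    // Hypothetical usage sketch of the high quality random number generator
    alglib.hqrndstate state;
    alglib.hqrndrandomize(out state);             // seed from the standard RNG
    double u = alglib.hqrnduniformr(state);       // uniform in (0,1)
    int    k = alglib.hqrnduniformi(state, 10);   // integer in [0,10)
    double g = alglib.hqrndnormal(state);         // standard normal deviate
    double x1, y1;
    alglib.hqrndunit2(state, out x1, out y1);     // random point with x1^2+y1^2=1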
incompletebeta
invincompletebeta
/************************************************************************* Incomplete beta integral Returns incomplete beta integral of the arguments, evaluated from zero to x. The function is defined as incbet(a,b,x) = Gamma(a+b)/(Gamma(a)*Gamma(b)) * integral from 0 to x of t^(a-1)*(1-t)^(b-1) dt. The domain of definition is 0 <= x <= 1. In this implementation a and b are restricted to positive values. The integral from x to 1 may be obtained by the symmetry relation 1 - incbet( a, b, x ) = incbet( b, a, 1-x ). The integral is evaluated by a continued fraction expansion or, when b*x is small, by a power series. ACCURACY: Tested at uniformly distributed random points (a,b,x) with a and b in "domain" and x between 0 and 1. Relative error: arithmetic domain # trials peak rms IEEE 0,5 10000 6.9e-15 4.5e-16 IEEE 0,85 250000 2.2e-13 1.7e-14 IEEE 0,1000 30000 5.3e-12 6.3e-13 IEEE 0,10000 250000 9.3e-11 7.1e-12 IEEE 0,100000 10000 8.7e-10 4.8e-11 Outputs smaller than the IEEE gradual underflow threshold were excluded from these statistics. Cephes Math Library, Release 2.8: June, 2000 Copyright 1984, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double incompletebeta(double a, double b, double x)
/************************************************************************* Inverse of incomplete beta integral Given y, the function finds x such that incbet( a, b, x ) = y . The routine performs interval halving or Newton iterations to find the root of incbet(a,b,x) - y = 0. ACCURACY: Relative error: x a,b arithmetic domain domain # trials peak rms IEEE 0,1 .5,10000 50000 5.8e-12 1.3e-13 IEEE 0,1 .25,100 100000 1.8e-13 3.9e-15 IEEE 0,1 0,5 50000 1.1e-12 5.5e-15 With a and b constrained to half-integer or integer values: IEEE 0,1 .5,10000 50000 5.8e-12 1.1e-13 IEEE 0,1 .5,100 100000 1.7e-14 7.9e-16 With a = .5, b constrained to half-integer or integer values: IEEE 0,1 .5,10000 10000 8.3e-11 1.0e-11 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1996, 2000 by Stephen L. Moshier *************************************************************************/
public static double invincompletebeta(double a, double b, double y)
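The pair IncompleteBeta/InvIncompleteBeta can be checked with a simple round trip, sketched below under the assumption of the usual alglib class prefix.

    // Hypothetical round-trip sketch: invincompletebeta inverts incompletebeta
    double a = 2.0, b = 3.0, x = 0.25;
    double y  = alglib.incompletebeta(a, b, x);      // y = incbet(a,b,x)
    double xr = alglib.invincompletebeta(a, b, y);   // xr should be ~0.25
    // xr differs from x only by rounding error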
idwinterpolant
idwbuildmodifiedshepard
idwbuildmodifiedshepardr
idwbuildnoisy
idwcalc
/************************************************************************* IDW interpolant. *************************************************************************/
public class idwinterpolant { }
/************************************************************************* IDW interpolant using modified Shepard method for uniform point distributions. INPUT PARAMETERS: XY - X and Y values, array[0..N-1,0..NX]. First NX columns contain X-values, last column contains Y-values. N - number of nodes, N>0. NX - space dimension, NX>=1. D - nodal function type, either: * 0 constant model. Just for demonstration only, worst model ever. * 1 linear model, least squares fitting. Simple model for datasets too small for quadratic models * 2 quadratic model, least squares fitting. Best model available (if your dataset is large enough). * -1 "fast" linear model, use with caution!!! It is significantly faster than linear/quadratic and better than constant model. But it is less robust (especially in the presence of noise). NQ - number of points used to calculate nodal functions (ignored for constant models). NQ should be LARGER than: * max(1.5*(1+NX),2^NX+1) for linear model, * max(3/4*(NX+2)*(NX+1),2^NX+1) for quadratic model. Values less than this threshold will be silently increased. NW - number of points used to calculate weights and to interpolate. Required: >=2^NX+1, values less than this threshold will be silently increased. Recommended value: about 2*NQ OUTPUT PARAMETERS: Z - IDW interpolant. NOTES: * best results are obtained with quadratic models, worst - with constant models * when N is large, NQ and NW must be significantly smaller than N both to obtain optimal performance and to obtain optimal accuracy. In 2 or 3-dimensional tasks NQ=15 and NW=25 are good values to start with. * NQ and NW may be greater than N. In such cases they will be automatically decreased. * this subroutine always succeeds (as long as correct parameters are passed). * see 'Multivariate Interpolation of Large Sets of Scattered Data' by Robert J. Renka for more information on this algorithm. * this subroutine assumes that the point distribution is uniform at small scales. If it isn't - for example, points are concentrated along "lines", but the "lines" distribution is uniform at the larger scale - then you should use IDWBuildModifiedShepardR() -- ALGLIB PROJECT -- Copyright 02.03.2010 by Bochkanov Sergey *************************************************************************/
public static void idwbuildmodifiedshepard( double[,] xy, int n, int nx, int d, int nq, int nw, out idwinterpolant z)
/************************************************************************* IDW interpolant using modified Shepard method for non-uniform datasets. This type of model uses constant nodal functions and interpolates using all nodes which are closer than the user-specified radius R. It may be used when the point distribution is non-uniform at small scales, but becomes uniform at distances on the order of R. INPUT PARAMETERS: XY - X and Y values, array[0..N-1,0..NX]. First NX columns contain X-values, last column contains Y-values. N - number of nodes, N>0. NX - space dimension, NX>=1. R - radius, R>0 OUTPUT PARAMETERS: Z - IDW interpolant. NOTES: * if there are fewer than IDWKMin points within the R-ball, the algorithm selects the IDWKMin closest ones, so that continuity properties of the interpolant are preserved even far from points. -- ALGLIB PROJECT -- Copyright 11.04.2010 by Bochkanov Sergey *************************************************************************/
public static void idwbuildmodifiedshepardr( double[,] xy, int n, int nx, double r, out idwinterpolant z)
/************************************************************************* IDW model for noisy data. This subroutine may be used to handle noisy data, i.e. data with noise in OUTPUT values. It differs from IDWBuildModifiedShepard() in the following aspects: * nodal functions are not constrained to pass through nodes: Qi(xi)<>yi, i.e. we have fitting instead of interpolation. * weights which are used during least squares fitting stage are all equal to 1.0 (independently of distance) * "fast"-linear or constant nodal functions are not supported (either not robust enough or too rigid) This problem requires far more complex tuning than interpolation problems. Below you can find some recommendations regarding this problem: * focus on tuning NQ; it controls noise reduction. As for NW, you can just make it equal to 2*NQ. * you can use cross-validation to determine optimal NQ. * optimal NQ is a result of a complex tradeoff between noise level (more noise = larger NQ required) and underlying function complexity (given fixed N, larger NQ means smoothing of complex features in the data). For example, NQ=N will reduce noise to the minimum level possible, but you will end up with just a constant/linear/quadratic (depending on D) least squares model for the whole dataset. INPUT PARAMETERS: XY - X and Y values, array[0..N-1,0..NX]. First NX columns contain X-values, last column contains Y-values. N - number of nodes, N>0. NX - space dimension, NX>=1. D - nodal function degree, either: * 1 linear model, least squares fitting. Simple model for datasets too small for quadratic models (or for very noisy problems). * 2 quadratic model, least squares fitting. Best model available (if your dataset is large enough). NQ - number of points used to calculate nodal functions. NQ should be significantly larger than 1.5 times the number of coefficients in a nodal function to overcome effects of noise: * larger than 1.5*(1+NX) for linear model, * larger than 3/4*(NX+2)*(NX+1) for quadratic model. Values less than this threshold will be silently increased. NW - number of points used to calculate weights and to interpolate. Required: >=2^NX+1, values less than this threshold will be silently increased. Recommended value: about 2*NQ or larger OUTPUT PARAMETERS: Z - IDW interpolant. NOTES: * best results are obtained with quadratic models, linear models are not recommended unless you are pretty sure that it is what you want * this subroutine always succeeds (as long as correct parameters are passed). * see 'Multivariate Interpolation of Large Sets of Scattered Data' by Robert J. Renka for more information on this algorithm. -- ALGLIB PROJECT -- Copyright 02.03.2010 by Bochkanov Sergey *************************************************************************/
public static void idwbuildnoisy( double[,] xy, int n, int nx, int d, int nq, int nw, out idwinterpolant z)
/************************************************************************* IDW interpolation INPUT PARAMETERS: Z - IDW interpolant built with one of model building subroutines. X - array[0..NX-1], interpolation point Result: IDW interpolant Z(X) -- ALGLIB -- Copyright 02.03.2010 by Bochkanov Sergey *************************************************************************/
public static double idwcalc(idwinterpolant z, double[] x)
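The IDW workflow is: build an interpolant with one of the IDWBuild* subroutines, then evaluate it with IDWCalc. A sketch for a 2-dimensional problem follows (illustrative only; the alglib class prefix is an assumption).

    // Hypothetical sketch: IDW interpolation of f(x0,x1) = x0 + x1 sampled on a
    // 4x4 grid; quadratic nodal functions (D=2), NQ=15, NW=25 as recommended above
    int nx = 2, n = 16;
    double[,] xy = new double[n, nx + 1];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
        {
            int row = 4 * i + j;
            xy[row, 0] = i / 3.0;                 // x0
            xy[row, 1] = j / 3.0;                 // x1
            xy[row, 2] = xy[row, 0] + xy[row, 1]; // y = x0 + x1
        }
    alglib.idwinterpolant z;
    alglib.idwbuildmodifiedshepard(xy, n, nx, 2, 15, 25, out z);
    double v = alglib.idwcalc(z, new double[]{0.5, 0.5});   // close to 1.0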
incompletegamma
incompletegammac
invincompletegammac
/************************************************************************* Incomplete gamma integral The function is defined by igam(a,x) = 1/Gamma(a) * integral from 0 to x of exp(-t)*t^(a-1) dt. In this implementation both arguments must be positive. The integral is evaluated by either a power series or continued fraction expansion, depending on the relative values of a and x. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 200000 3.6e-14 2.9e-15 IEEE 0,100 300000 9.9e-14 1.5e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double incompletegamma(double a, double x)
/************************************************************************* Complemented incomplete gamma integral The function is defined by igamc(a,x) = 1 - igam(a,x) = 1/Gamma(a) * integral from x to infinity of exp(-t)*t^(a-1) dt. In this implementation both arguments must be positive. The integral is evaluated by either a power series or continued fraction expansion, depending on the relative values of a and x. ACCURACY: Tested at random a, x. a x Relative error: arithmetic domain domain # trials peak rms IEEE 0.5,100 0,100 200000 1.9e-14 1.7e-15 IEEE 0.01,0.5 0,100 200000 1.4e-13 1.6e-15 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static double incompletegammac(double a, double x)
/************************************************************************* Inverse of complemented incomplete gamma integral Given p, the function finds x such that igamc( a, x ) = p. Starting with the approximate value x = a*t^3, where t = 1 - d - ndtri(p)*sqrt(d) and d = 1/(9a), the routine performs up to 10 Newton iterations to find the root of igamc(a,x) - p = 0. ACCURACY: Tested at random a, p in the intervals indicated. a p Relative error: arithmetic domain domain # trials peak rms IEEE 0.5,100 0,0.5 100000 1.0e-14 1.7e-15 IEEE 0.01,0.5 0,0.5 100000 9.0e-14 3.4e-15 IEEE 0.5,10000 0,0.5 20000 2.3e-13 3.8e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double invincompletegammac(double a, double y0)
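As with the incomplete beta functions, the gamma pair can be checked with a round trip; the sketch below assumes the usual alglib class prefix.

    // Hypothetical round-trip sketch for the complemented incomplete gamma integral
    double a = 4.0, x = 3.0;
    double p  = alglib.incompletegammac(a, x);       // p = igamc(a,x)
    double xr = alglib.invincompletegammac(a, p);    // xr should reproduce x ~ 3.0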
rmatrixinvupdatecolumn
rmatrixinvupdaterow
rmatrixinvupdatesimple
rmatrixinvupdateuv
/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm updates matrix A^-1 when adding a vector to a column of matrix A. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. UpdColumn - the column of A to which the vector U is added. 0 <= UpdColumn <= N-1 U - the vector being added to the column. Array whose index ranges within [0..N-1]. Output parameters: InvA - inverse of modified matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/
public static void rmatrixinvupdatecolumn( ref double[,] inva, int n, int updcolumn, double[] u)
/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm updates matrix A^-1 when adding a vector to a row of matrix A. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. UpdRow - the row of A to which the vector V is added. 0 <= UpdRow <= N-1 V - the vector being added to the row. Array whose index ranges within [0..N-1]. Output parameters: InvA - inverse of modified matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/
public static void rmatrixinvupdaterow( ref double[,] inva, int n, int updrow, double[] v)
/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm updates matrix A^-1 when adding a number to an element of matrix A. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. UpdRow - row where the element to be updated is stored. UpdColumn - column where the element to be updated is stored. UpdVal - a number to be added to the element. Output parameters: InvA - inverse of modified matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/
public static void rmatrixinvupdatesimple( ref double[,] inva, int n, int updrow, int updcolumn, double updval)
/************************************************************************* Inverse matrix update by the Sherman-Morrison formula The algorithm computes the inverse of matrix A+u*v’ by using the given matrix A^-1 and the vectors u and v. Input parameters: InvA - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. U - the vector modifying the matrix. Array whose index ranges within [0..N-1]. V - the vector modifying the matrix. Array whose index ranges within [0..N-1]. Output parameters: InvA - inverse of matrix A + u*v'. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/
public static void rmatrixinvupdateuv( ref double[,] inva, int n, double[] u, double[] v)
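A sketch of a rank-1 update (alglib class prefix assumed): starting from A = I, whose inverse is also I, RMatrixInvUpdateUV produces inv(A + u*v') without refactorizing the matrix.

    // Hypothetical sketch: Sherman-Morrison update of an inverse
    int n = 3;
    double[,] inva = new double[n, n];
    for (int i = 0; i < n; i++) inva[i, i] = 1.0;   // inverse of the identity
    double[] u = { 1.0, 0.0, 2.0 };
    double[] v = { 0.5, 1.0, 0.0 };
    // 1 + v'*u = 1.5, so I + u*v' is nonsingular and the update is well defined
    alglib.rmatrixinvupdateuv(ref inva, n, u, v);
    // inva now holds inv(I + u*v'); for a single element, row or column use the
    // Simple/Row/Column variants documented above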
jacobianellipticfunctions
/************************************************************************* Jacobian Elliptic Functions Evaluates the Jacobian elliptic functions sn(u|m), cn(u|m), and dn(u|m) of parameter m between 0 and 1, and real argument u. These functions are periodic, with quarter-period on the real axis equal to the complete elliptic integral ellpk(1.0-m). Relation to incomplete elliptic integral: If u = ellik(phi,m), then sn(u|m) = sin(phi), and cn(u|m) = cos(phi). Phi is called the amplitude of u. Computation is by means of the arithmetic-geometric mean algorithm, except when m is within 1e-9 of 0 or 1. In the latter case with m close to 1, the approximation applies only for phi < pi/2. ACCURACY: Tested at random points with u between 0 and 10, m between 0 and 1. Absolute error (* = relative error): arithmetic function # trials peak rms IEEE phi 10000 9.2e-16* 1.4e-16* IEEE sn 50000 4.1e-15 4.6e-16 IEEE cn 40000 3.6e-15 4.4e-16 IEEE dn 10000 1.3e-12 1.8e-14 Peak error observed in consistency check using addition theorem for sn(u+v) was 4e-16 (absolute). Also tested by the above relation to the incomplete elliptic integral. Accuracy deteriorates when u is large. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static void jacobianellipticfunctions( double u, double m, out double sn, out double cn, out double dn, out double ph)
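A quick sanity sketch (alglib class prefix assumed): the returned values should satisfy the standard identities sn^2+cn^2 = 1 and dn^2 + m*sn^2 = 1.

    // Hypothetical sketch: Jacobian elliptic functions at u=0.5, m=0.3
    double sn, cn, dn, ph;
    alglib.jacobianellipticfunctions(0.5, 0.3, out sn, out cn, out dn, out ph);
    // sn*sn + cn*cn ~ 1 and dn*dn + 0.3*sn*sn ~ 1 up to rounding error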
jarqueberatest
/************************************************************************* Jarque-Bera test This test checks the hypothesis that a given sample X is drawn from a normal random variable. Requirements: * the number of elements in the sample is not less than 5. Input parameters: X - sample. Array whose index goes from 0 to N-1. N - size of the sample. N>=5 Output parameters: P - p-value for the test. If P is less than the given significance level, the null hypothesis (normality) is rejected. Accuracy of the approximation used (5<=N<=1951): p-value relative error [1, 0.1] < 1% [0.1, 0.01] < 2% [0.01, 0.001] < 6% [0.001, 0] wasn't measured For N>1951 accuracy wasn't measured, but it shouldn't differ sharply from the table values. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
public static void jarqueberatest(double[] x, int n, out double p)
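A minimal sketch of a call with the single p-value interface shown above (illustrative only; the alglib class prefix and the sample values are assumptions).

    // Hypothetical sketch: Jarque-Bera normality test on a small sample (N>=5)
    double[] x = { -1.2, -0.4, 0.1, 0.3, 0.8, 1.5, -0.7, 0.2, 0.5, -0.1 };
    double p;
    alglib.jarqueberatest(x, x.Length, out p);
    // a small p (e.g. p < 0.05) argues against normality of the sample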
kmeansgenerate
/************************************************************************* k-means++ clustering INPUT PARAMETERS: XY - dataset, array [0..NPoints-1,0..NVars-1]. NPoints - dataset size, NPoints>=K NVars - number of variables, NVars>=1 K - desired number of clusters, K>=1 Restarts - number of restarts, Restarts>=1 OUTPUT PARAMETERS: Info - return code: * -3, if task is degenerate (number of distinct points is less than K) * -1, if incorrect NPoints/NVars/K/Restarts was passed * 1, if subroutine finished successfully C - array[0..NVars-1,0..K-1], matrix whose columns store the cluster centers XYC - array[NPoints], which contains cluster indexes -- ALGLIB -- Copyright 21.03.2009 by Bochkanov Sergey *************************************************************************/
public static void kmeansgenerate( double[,] xy, int npoints, int nvars, int k, int restarts, out int info, out double[,] c, out int[] xyc)
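A minimal clustering sketch (alglib class prefix and toy data assumed): two well-separated groups of 2-variable points, K=2, five restarts.

    // Hypothetical sketch: k-means++ with K=2 on a tiny dataset
    double[,] xy = new double[,] {
        {0.0, 0.1}, {0.1, 0.0}, {0.0, 0.0},
        {5.0, 5.1}, {5.1, 5.0}, {5.0, 5.0}
    };
    int info;
    double[,] c;
    int[] xyc;
    alglib.kmeansgenerate(xy, 6, 2, 2, 5, out info, out c, out xyc);
    // info==1 on success; c is 2x2 (columns are cluster centers),
    // xyc[i] is the cluster index assigned to point i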
laguerrecalculate
laguerrecoefficients
laguerresum
/************************************************************************* Calculation of the value of the Laguerre polynomial. Parameters: n - degree, n>=0 x - argument Result: the value of the Laguerre polynomial Ln at x *************************************************************************/
public static double laguerrecalculate(int n, double x)
/************************************************************************* Representation of Ln as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/
public static void laguerrecoefficients(int n, out double[] c)
/************************************************************************* Summation of Laguerre polynomials using Clenshaw’s recurrence formula. This routine calculates c[0]*L0(x) + c[1]*L1(x) + ... + c[N]*LN(x) Parameters: n - degree, n>=0 x - argument Result: the value of the sum at x *************************************************************************/
public static double laguerresum(double[] c, int n, double x)
fisherlda
fisherldan
/************************************************************************* Multiclass Fisher LDA Subroutine finds coefficients of a linear combination which optimally separates the training set into classes. INPUT PARAMETERS: XY - training set, array[0..NPoints-1,0..NVars]. First NVars columns store values of independent variables, next column stores the number of the class (from 0 to NClasses-1) which the dataset element belongs to. Fractional values are rounded to the nearest integer. NPoints - training set size, NPoints>=0 NVars - number of independent variables, NVars>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: Info - return code: * -4, if internal EVD subroutine hasn't converged * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<0, NVars<1, NClasses<2) * 1, if task has been solved * 2, if there was multicollinearity in the training set, but the task has been solved. W - linear combination coefficients, array[0..NVars-1] -- ALGLIB -- Copyright 31.05.2008 by Bochkanov Sergey *************************************************************************/
public static void fisherlda( double[,] xy, int npoints, int nvars, int nclasses, out int info, out double[] w)
/************************************************************************* N-dimensional multiclass Fisher LDA Subroutine finds coefficients of linear combinations which optimally separate the training set into classes. It returns an N-dimensional basis whose vectors are sorted by quality of training set separation (in descending order). INPUT PARAMETERS: XY - training set, array[0..NPoints-1,0..NVars]. First NVars columns store values of independent variables, next column stores the number of the class (from 0 to NClasses-1) which the dataset element belongs to. Fractional values are rounded to the nearest integer. NPoints - training set size, NPoints>=0 NVars - number of independent variables, NVars>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: Info - return code: * -4, if internal EVD subroutine hasn't converged * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<0, NVars<1, NClasses<2) * 1, if task has been solved * 2, if there was multicollinearity in the training set, but the task has been solved. W - basis, array[0..NVars-1,0..NVars-1]; columns of the matrix store basis vectors, sorted by quality of training set separation (in descending order) -- ALGLIB -- Copyright 31.05.2008 by Bochkanov Sergey *************************************************************************/
public static void fisherldan( double[,] xy, int npoints, int nvars, int nclasses, out int info, out double[,] w)
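The sketch below applies the one-dimensional variant FisherLDA (documented above) to a toy two-class problem; the alglib class prefix and the data are illustrative assumptions.

    // Hypothetical sketch: two-class Fisher LDA. Last column stores the class
    // index (0 or 1); W receives the separating direction.
    double[,] xy = new double[,] {
        {0.0, 0.0, 0}, {0.1, 0.2, 0}, {0.2, 0.1, 0},
        {2.0, 2.0, 1}, {2.1, 2.2, 1}, {2.2, 2.1, 1}
    };
    int info;
    double[] w;
    alglib.fisherlda(xy, 6, 2, 2, out info, out w);
    // info==1 (or 2 in the multicollinear case); w is array[0..NVars-1]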
legendrecalculate
legendrecoefficients
legendresum
/************************************************************************* Calculation of the value of the Legendre polynomial Pn. Parameters: n - degree, n>=0 x - argument Result: the value of the Legendre polynomial Pn at x *************************************************************************/
public static double legendrecalculate(int n, double x)
/************************************************************************* Representation of Pn as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/
public static void legendrecoefficients(int n, out double[] c)
/************************************************************************* Summation of Legendre polynomials using Clenshaw’s recurrence formula. This routine calculates c[0]*P0(x) + c[1]*P1(x) + ... + c[N]*PN(x) Parameters: n - degree, n>=0 x - argument Result: the value of the sum at x *************************************************************************/
public static double legendresum(double[] c, int n, double x)
linearmodel
lrreport
lravgerror
lravgrelerror
lrbuild
lrbuilds
lrbuildz
lrbuildzs
lrpack
lrprocess
lrrmserror
lrunpack
/************************************************************************* Linear model in ALGLIB format. Built by LRBuild/LRBuildS/LRBuildZ/LRBuildZS (or assembled with LRPack); use the subroutines of this unit to work with it. *************************************************************************/
public class linearmodel { }
/************************************************************************* LRReport structure contains additional information about linear model: * C - covariation matrix, array[0..NVars,0..NVars]. C[i,j] = Cov(A[i],A[j]) * RMSError - root mean square error on a training set * AvgError - average error on a training set * AvgRelError - average relative error on a training set (excluding observations with zero function value). * CVRMSError - leave-one-out cross-validation estimate of generalization error. Calculated using fast algorithm with O(NVars*NPoints) complexity. * CVAvgError - cross-validation estimate of average error * CVAvgRelError - cross-validation estimate of average relative error All other fields of the structure are intended for internal use and should not be used outside ALGLIB. *************************************************************************/
public class lrreport { public double[,] c; public double rmserror; public double avgerror; public double avgrelerror; public double cvrmserror; public double cvavgerror; public double cvavgrelerror; public int ncvdefects; public int[] cvdefects; }
/************************************************************************* Average error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: average error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
public static double lravgerror(linearmodel lm, double[,] xy, int npoints)
/************************************************************************* Average relative error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: average relative error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
public static double lravgrelerror( linearmodel lm, double[,] xy, int npoints)
/************************************************************************* Linear regression Subroutine builds the model: Y = A(0)*X[0] + ... + A(N-1)*X[N-1] + A(N) and returns the model in ALGLIB format together with the covariation matrix, training set errors (rms, average, average relative) and a leave-one-out cross-validation estimate of the generalization error. The CV estimate is calculated using a fast algorithm with O(NPoints*NVars) complexity. When the covariation matrix is calculated, standard deviations of function values are assumed to be equal to the RMS error on the training set. INPUT PARAMETERS: XY - training set, array [0..NPoints-1,0..NVars]: * NVars columns - independent variables * last column - dependent variable NPoints - training set size, NPoints>NVars+1 NVars - number of independent variables OUTPUT PARAMETERS: Info - return code: * -255, in case of unknown internal error * -4, if internal SVD subroutine hasn't converged * -1, if incorrect parameters were passed (NPoints<NVars+2, NVars<1). * 1, if subroutine successfully finished LM - linear model in the ALGLIB format. Use subroutines of this unit to work with the model. AR - additional results -- ALGLIB -- Copyright 02.08.2008 by Bochkanov Sergey *************************************************************************/
public static void lrbuild( double[,] xy, int npoints, int nvars, out int info, out linearmodel lm, out lrreport ar)
/************************************************************************* Linear regression Variant of LRBuild which uses a vector of standard deviations (errors in function values). INPUT PARAMETERS: XY - training set, array [0..NPoints-1,0..NVars]: * NVars columns - independent variables * last column - dependent variable S - standard deviations (errors in function values) array[0..NPoints-1], S[i]>0. NPoints - training set size, NPoints>NVars+1 NVars - number of independent variables OUTPUT PARAMETERS: Info - return code: * -255, in case of unknown internal error * -4, if internal SVD subroutine hasn't converged * -1, if incorrect parameters were passed (NPoints<NVars+2, NVars<1). * -2, if S[I]<=0 * 1, if subroutine successfully finished LM - linear model in the ALGLIB format. Use subroutines of this unit to work with the model. AR - additional results -- ALGLIB -- Copyright 02.08.2008 by Bochkanov Sergey *************************************************************************/
public static void lrbuilds( double[,] xy, double[] s, int npoints, int nvars, out int info, out linearmodel lm, out lrreport ar)
/************************************************************************* Like LRBuild but builds model Y = A(0)*X[0] + ... + A(N-1)*X[N-1] i.e. with zero constant term. -- ALGLIB -- Copyright 30.10.2008 by Bochkanov Sergey *************************************************************************/
public static void lrbuildz( double[,] xy, int npoints, int nvars, out int info, out linearmodel lm, out lrreport ar)
/************************************************************************* Like LRBuildS, but builds model Y = A(0)*X[0] + ... + A(N-1)*X[N-1] i.e. with zero constant term. -- ALGLIB -- Copyright 30.10.2008 by Bochkanov Sergey *************************************************************************/
public static void lrbuildzs( double[,] xy, double[] s, int npoints, int nvars, out int info, out linearmodel lm, out lrreport ar)
/************************************************************************* "Packs" coefficients and creates linear model in ALGLIB format (LRUnpack reversed). INPUT PARAMETERS: V - coefficients, array[0..NVars] NVars - number of independent variables OUTPUT PAREMETERS: LM - linear model. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
public static void lrpack(double[] v, int nvars, out linearmodel lm)
/************************************************************************* Processing INPUT PARAMETERS: LM - linear model X - input vector, array[0..NVars-1]. Result: value of linear model regression estimate -- ALGLIB -- Copyright 03.09.2008 by Bochkanov Sergey *************************************************************************/
public static double lrprocess(linearmodel lm, double[] x)
/************************************************************************* RMS error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: root mean square error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
public static double lrrmserror(linearmodel lm, double[,] xy, int npoints)
/************************************************************************* Unpacks coefficients of linear model. INPUT PARAMETERS: LM - linear model in ALGLIB format OUTPUT PARAMETERS: V - coefficients, array[0..NVars] constant term (intercept) is stored in the V[NVars]. NVars - number of independent variables (one less than number of coefficients) -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
public static void lrunpack(linearmodel lm, out double[] v, out int nvars)
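A typical regression workflow with this unit - LRBuild to fit, LRProcess to evaluate, LRUnpack to inspect coefficients - is sketched below on a 1-variable dataset generated from y = 2*x + 1 (illustrative only; the alglib class prefix and the data are assumptions).

    // Hypothetical sketch: fit y ~ a0*x + a1 and evaluate the model
    double[,] xy = new double[,] {
        {0, 1.0}, {1, 3.1}, {2, 4.9}, {3, 7.0}, {4, 9.1}, {5, 11.0}
    };
    int info;
    alglib.linearmodel lm;
    alglib.lrreport rep;
    alglib.lrbuild(xy, 6, 1, out info, out lm, out rep);     // info==1 on success
    double y = alglib.lrprocess(lm, new double[]{2.5});      // ~6.0
    double[] coeffs;
    int nvars;
    alglib.lrunpack(lm, out coeffs, out nvars);    // coeffs[0]~2 (slope), coeffs[1]~1 (intercept)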
logitmodel
mnlreport
mnlavgce
mnlavgerror
mnlavgrelerror
mnlclserror
mnlpack
mnlprocess
mnlprocessi
mnlrelclserror
mnlrmserror
mnltrainh
mnlunpack
/************************************************************************* Logit model in ALGLIB format. Built by MNLTrainH (or assembled with MNLPack); use the subroutines of this unit to work with it. *************************************************************************/
public class logitmodel { }
/************************************************************************* MNLReport structure contains information about training process: * NGrad - number of gradient calculations * NHess - number of Hessian calculations *************************************************************************/
public class mnlreport { public int ngrad; public int nhess; }
/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*ln(2)). -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
public static double mnlavgce(logitmodel lm, double[,] xy, int npoints)
/************************************************************************* Average error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: average error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
public static double mnlavgerror(logitmodel lm, double[,] xy, int npoints)
/************************************************************************* Average relative error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: average relative error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
public static double mnlavgrelerror( logitmodel lm, double[,] xy, int ssize)
/************************************************************************* Classification error on test set = MNLRelClsError*NPoints -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
public static int mnlclserror(logitmodel lm, double[,] xy, int npoints)
/************************************************************************* "Packs" coefficients and creates logit model in ALGLIB format (MNLUnpack reversed). INPUT PARAMETERS: A - model (see MNLUnpack) NVars - number of independent variables NClasses - number of classes OUTPUT PARAMETERS: LM - logit model. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
public static void mnlpack( double[,] a, int nvars, int nclasses, out logitmodel lm)
/************************************************************************* Processing INPUT PARAMETERS: LM - logit model, passed by non-constant reference (some fields of structure are used as temporaries when calculating model output). X - input vector, array[0..NVars-1]. Y - (possibly) preallocated buffer; if size of Y is less than NClasses, it will be reallocated. If it is large enough, it is NOT reallocated, so we can save some time on reallocation. OUTPUT PARAMETERS: Y - result, array[0..NClasses-1] Vector of posterior probabilities for classification task. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
public static void mnlprocess(logitmodel lm, double[] x, ref double[] y)
/************************************************************************* 'interactive' variant of MNLProcess for languages like Python which support constructs like "Y = MNLProcess(LM,X)" and the interactive mode of the interpreter. This function allocates a new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from the command line. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
public static void mnlprocessi(logitmodel lm, double[] x, out double[] y)
/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
public static double mnlrelclserror( logitmodel lm, double[,] xy, int npoints)
/************************************************************************* RMS error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: root mean square error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
public static double mnlrmserror(logitmodel lm, double[,] xy, int npoints)
/************************************************************************* This subroutine trains a logit model. INPUT PARAMETERS: XY - training set, array[0..NPoints-1,0..NVars] First NVars columns store values of independent variables, next column stores the number of the class (from 0 to NClasses-1) which the dataset element belongs to. Fractional values are rounded to the nearest integer. NPoints - training set size, NPoints>=1 NVars - number of independent variables, NVars>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<NVars+2, NVars<1, NClasses<2). * 1, if task has been solved LM - model built Rep - training report -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
public static void mnltrainh( double[,] xy, int npoints, int nvars, int nclasses, out int info, out logitmodel lm, out mnlreport rep)
/************************************************************************* Unpacks coefficients of the logit model. The logit model has the form: P(class=i) = S(i) / (S(0) + S(1) + ... +S(M-1)) S(i) = Exp(A[i,0]*X[0] + ... + A[i,N-1]*X[N-1] + A[i,N]), when i<M-1 S(M-1) = 1 INPUT PARAMETERS: LM - logit model in ALGLIB format OUTPUT PARAMETERS: V - coefficients, array[0..NClasses-2,0..NVars] NVars - number of independent variables NClasses - number of classes -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
public static void mnlunpack( logitmodel lm, out double[,] a, out int nvars, out int nclasses)
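A typical workflow with this unit - MNLTrainH to train, MNLProcessI to obtain posterior probabilities - is sketched below for a two-class problem (illustrative only; the alglib class prefix and the data are assumptions).

    // Hypothetical sketch: train a two-class logit model and classify a point
    double[,] xy = new double[,] {
        {0.0, 0.1, 0}, {0.2, 0.0, 0}, {0.1, 0.2, 0},
        {1.0, 1.1, 1}, {1.2, 1.0, 1}, {1.1, 1.2, 1}
    };
    int info;
    alglib.logitmodel lm;
    alglib.mnlreport rep;
    alglib.mnltrainh(xy, 6, 2, 2, out info, out lm, out rep);   // info==1 on success
    double[] probs;
    alglib.mnlprocessi(lm, new double[]{0.1, 0.1}, out probs);
    // probs[0]+probs[1]==1; probs[0] should be close to 1 for this point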
barycentricfitreport
lsfitreport
lsfitstate
polynomialfitreport
spline1dfitreport
barycentricfitfloaterhormann
barycentricfitfloaterhormannwc
lsfitcreatef
lsfitcreatefg
lsfitcreatefgh
lsfitcreatewf
lsfitcreatewfg
lsfitcreatewfgh
lsfitfit
lsfitlinear
lsfitlinearc
lsfitlinearw
lsfitlinearwc
lsfitresults
lsfitsetbc
lsfitsetcond
lsfitsetscale
lsfitsetstpmax
lsfitsetxrep
polynomialfit
polynomialfitwc
spline1dfitcubic
spline1dfitcubicwc
spline1dfithermite
spline1dfithermitewc
spline1dfitpenalized
spline1dfitpenalizedw
lsfit_d_lin Unconstrained (general) linear least squares fitting with and without weights
lsfit_d_linc Constrained (general) linear least squares fitting with and without weights
lsfit_d_nlf Nonlinear fitting using function value only
lsfit_d_nlfb Bound constrained nonlinear fitting using function value only
lsfit_d_nlfg Nonlinear fitting using gradient
lsfit_d_nlfgh Nonlinear fitting using gradient and Hessian
lsfit_d_nlscale Nonlinear fitting with custom scaling and bound constraints
lsfit_d_pol Unconstrained polynomial fitting
lsfit_d_polc Constrained polynomial fitting
lsfit_d_spline Unconstrained fitting by penalized regression spline
/************************************************************************* Barycentric fitting report: RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error TaskRCond reciprocal of task's condition number *************************************************************************/
public class barycentricfitreport { public double taskrcond; public int dbest; public double rmserror; public double avgerror; public double avgrelerror; public double maxerror; }
/************************************************************************* Least squares fitting report: TaskRCond reciprocal of task's condition number IterationsCount number of internal iterations RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error WRMSError weighted RMS error *************************************************************************/
public class lsfitreport { public double taskrcond; public int iterationscount; public double rmserror; public double avgerror; public double avgrelerror; public double maxerror; public double wrmserror; }
/************************************************************************* Nonlinear fitter. You should use ALGLIB functions to work with fitter. Never try to access its fields directly! *************************************************************************/
public class lsfitstate { }
/************************************************************************* Polynomial fitting report: TaskRCond reciprocal of task's condition number RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error *************************************************************************/
public class polynomialfitreport { public double taskrcond; public double rmserror; public double avgerror; public double avgrelerror; public double maxerror; }
/************************************************************************* Spline fitting report: RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error Fields below are filled by obsolete functions (Spline1DFitCubic, Spline1DFitHermite). Modern fitting functions do NOT fill these fields: TaskRCond reciprocal of task's condition number *************************************************************************/
public class spline1dfitreport { public double taskrcond; public double rmserror; public double avgerror; public double avgrelerror; public double maxerror; }
/************************************************************************* Rational least squares fitting using Floater-Hormann rational functions with optimal D chosen from [0,9]. An equidistant grid with M nodes on [min(x),max(x)] is used to build basis functions. Different values of D are tried, optimal D (least root mean square error) is chosen. Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2) (mostly dominated by the least squares solver). INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. N - number of points, N>0. M - number of basis functions ( = number_of_nodes), M>=2. OUTPUT PARAMETERS: Info- same format as in LSFitLinearWC() subroutine. * Info>0 task is solved * Info<=0 an error occurred: -4 means non-convergence of internal SVD -3 means inconsistent constraints B - barycentric interpolant. Rep - report, same format as in LSFitLinearWC() subroutine. Following fields are set: * DBest best value of the D parameter * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
public static void barycentricfitfloaterhormann( double[] x, double[] y, int n, int m, out int info, out barycentricinterpolant b, out barycentricfitreport rep)
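A minimal fitting sketch for the unconstrained variant (alglib class prefix assumed; the resulting barycentricinterpolant object is evaluated by subroutines of the barycentric interpolation unit, which are not listed in this section).

    // Hypothetical sketch: fit exp(x) on [-1,1] by a Floater-Hormann interpolant
    int n = 11;
    double[] x = new double[n], y = new double[n];
    for (int i = 0; i < n; i++)
    {
        x[i] = -1.0 + 2.0 * i / (n - 1);
        y[i] = Math.Exp(x[i]);
    }
    int info;
    alglib.barycentricinterpolant b;
    alglib.barycentricfitreport rep;
    alglib.barycentricfitfloaterhormann(x, y, n, 5, out info, out b, out rep);
    // info>0 on success; rep.dbest is the chosen D, rep.rmserror the rms error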
/************************************************************************* Weighted rational least squares fitting using Floater-Hormann rational functions with optimal D chosen from [0,9], with constraints and individual weights. An equidistant grid with M nodes on [min(x),max(x)] is used to build basis functions. Different values of D are tried, optimal D (least WEIGHTED root mean square error) is chosen. Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2) (mostly dominated by the least squares solver). SEE ALSO * BarycentricFitFloaterHormann(), "lightweight" fitting without individual weights and constraints. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points, N>0. XC - points where function values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints, 0<=K<M. K=0 means no constraints (XC/YC/DC are not used in such cases) M - number of basis functions ( = number_of_nodes), M>=2. OUTPUT PARAMETERS: Info- same format as in LSFitLinearWC() subroutine. * Info>0 task is solved * Info<=0 an error occurred: -4 means non-convergence of internal SVD -3 means inconsistent constraints -1 means other errors in parameters passed (N<=0, for example) B - barycentric interpolant. Rep - report, same format as in LSFitLinearWC() subroutine. Following fields are set: * DBest best value of the D parameter * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, constraints allow us to improve the quality of the fit. Here we summarize our experience with constrained barycentric interpolants: * excessive constraints can be inconsistent. Floater-Hormann basis functions aren't as flexible as splines (although they are very smooth). * the more evenly constraints are spread across [min(x),max(x)], the more chances that they will be consistent * the greater M is (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints IS NOT GUARANTEED. * in several special cases, however, we CAN guarantee consistency. * one of these cases is constraints on the function VALUES at the interval boundaries. Note that consistency of the constraints on the function DERIVATIVES is NOT guaranteed (in such cases you can use cubic splines, which are more flexible). * another special case is ONE constraint on the function value (OR, but not AND, derivative) anywhere in the interval Our final recommendation is to use constraints WHEN AND ONLY WHEN you can't solve your task without them. Anything beyond the special cases given above is not guaranteed and may result in inconsistency. 
-- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
public static void barycentricfitfloaterhormannwc( double[] x, double[] y, double[] w, int n, double[] xc, double[] yc, int[] dc, int k, int m, out int info, out barycentricinterpolant b, out barycentricfitreport rep)
/************************************************************************* Nonlinear least squares fitting using function values only. Combination of numerical differentiation and secant updates is used to obtain function Jacobian. Nonlinear task min(F(c)) is solved, where F(c) = (f(c,x[0])-y[0])^2 + ... + (f(c,x[n-1])-y[n-1])^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * w is an N-dimensional vector of weight coefficients, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses only f(c,x[i]). INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted DiffStep- numerical differentiation step; should not be very small or large; large = loss of accuracy small = growth of round-off errors OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 18.10.2008 by Bochkanov Sergey *************************************************************************/
public static void lsfitcreatef( double[,] x, double[] y, double[] c, double diffstep, out lsfitstate state)
public static void lsfitcreatef( double[,] x, double[] y, double[] c, int n, int m, int k, double diffstep, out lsfitstate state)

Examples:   [1]  [2]  [3]  

/************************************************************************* Nonlinear least squares fitting using gradient only, without individual weights. Nonlinear task min(F(c)) is solved, where F(c) = ((f(c,x[0])-y[0]))^2 + ... + ((f(c,x[n-1])-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses only f(c,x[i]) and its gradient. INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted CheapFG - boolean flag, which is: * True if both function and gradient calculation complexity are less than O(M^2). An improved algorithm can be used which corresponds to FGJ scheme from MINLM unit. * False otherwise. Standard Jacobian-based Levenberg-Marquardt algorithm will be used (FJ scheme). OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void lsfitcreatefg( double[,] x, double[] y, double[] c, bool cheapfg, out lsfitstate state)
public static void lsfitcreatefg( double[,] x, double[] y, double[] c, int n, int m, int k, bool cheapfg, out lsfitstate state)

Examples:   [1]  

/************************************************************************* Nonlinear least squares fitting using gradient/Hessian, without individual weights. Nonlinear task min(F(c)) is solved, where F(c) = ((f(c,x[0])-y[0]))^2 + ... + ((f(c,x[n-1])-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses f(c,x[i]), its gradient and its Hessian. INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void lsfitcreatefgh( double[,] x, double[] y, double[] c, out lsfitstate state)
public static void lsfitcreatefgh( double[,] x, double[] y, double[] c, int n, int m, int k, out lsfitstate state)

Examples:   [1]  

/************************************************************************* Weighted nonlinear least squares fitting using function values only. Combination of numerical differentiation and secant updates is used to obtain function Jacobian. Nonlinear task min(F(c)) is solved, where F(c) = (w[0]*(f(c,x[0])-y[0]))^2 + ... + (w[n-1]*(f(c,x[n-1])-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * w is an N-dimensional vector of weight coefficients, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses only f(c,x[i]). INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. W - weights, array[0..N-1] C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted DiffStep- numerical differentiation step; should not be very small or large; large = loss of accuracy small = growth of round-off errors OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 18.10.2008 by Bochkanov Sergey *************************************************************************/
public static void lsfitcreatewf( double[,] x, double[] y, double[] w, double[] c, double diffstep, out lsfitstate state)
public static void lsfitcreatewf( double[,] x, double[] y, double[] w, double[] c, int n, int m, int k, double diffstep, out lsfitstate state)

Examples:   [1]  [2]  

/************************************************************************* Weighted nonlinear least squares fitting using gradient only. Nonlinear task min(F(c)) is solved, where F(c) = (w[0]*(f(c,x[0])-y[0]))^2 + ... + (w[n-1]*(f(c,x[n-1])-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * w is an N-dimensional vector of weight coefficients, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses only f(c,x[i]) and its gradient. INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. W - weights, array[0..N-1] C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted CheapFG - boolean flag, which is: * True if both function and gradient calculation complexity are less than O(M^2). An improved algorithm can be used which corresponds to FGJ scheme from MINLM unit. * False otherwise. Standard Jacobian-based Levenberg-Marquardt algorithm will be used (FJ scheme). OUTPUT PARAMETERS: State - structure which stores algorithm state See also: LSFitResults LSFitCreateFG (fitting without weights) LSFitCreateWFGH (fitting using Hessian) LSFitCreateFGH (fitting using Hessian, without weights) -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void lsfitcreatewfg( double[,] x, double[] y, double[] w, double[] c, bool cheapfg, out lsfitstate state)
public static void lsfitcreatewfg( double[,] x, double[] y, double[] w, double[] c, int n, int m, int k, bool cheapfg, out lsfitstate state)

Examples:   [1]  

/************************************************************************* Weighted nonlinear least squares fitting using gradient/Hessian. Nonlinear task min(F(c)) is solved, where F(c) = (w[0]*(f(c,x[0])-y[0]))^2 + ... + (w[n-1]*(f(c,x[n-1])-y[n-1]))^2, * N is a number of points, * M is a dimension of a space points belong to, * K is a dimension of a space of parameters being fitted, * w is an N-dimensional vector of weight coefficients, * x is a set of N points, each of them is an M-dimensional vector, * c is a K-dimensional vector of parameters being fitted This subroutine uses f(c,x[i]), its gradient and its Hessian. INPUT PARAMETERS: X - array[0..N-1,0..M-1], points (one row = one point) Y - array[0..N-1], function values. W - weights, array[0..N-1] C - array[0..K-1], initial approximation to the solution, N - number of points, N>1 M - dimension of space K - number of parameters being fitted OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void lsfitcreatewfgh( double[,] x, double[] y, double[] w, double[] c, out lsfitstate state)
public static void lsfitcreatewfgh( double[,] x, double[] y, double[] w, double[] c, int n, int m, int k, out lsfitstate state)

Examples:   [1]  

/************************************************************************* This family of functions is used to launch iterations of the nonlinear fitter. These functions accept the following parameters: state - algorithm state func - callback which calculates function (or merit function) value func at given point x grad - callback which calculates function (or merit function) value func and gradient grad at given point x hess - callback which calculates function (or merit function) value func, gradient grad and Hessian hess at given point x rep - optional callback which is called after each iteration can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep can be NULL NOTES: 1. this algorithm is somewhat unusual because it works with parameterized function f(C,X), where X is a function argument (we have many points which are characterized by different argument values), and C is a parameter to fit. For example, if we want to do linear fit by f(c0,c1,x) = c0*x+c1, then x will be argument, and {c0,c1} will be parameters. It is important to understand that this algorithm finds minimum in the space of function PARAMETERS (not arguments), so it needs derivatives of f() with respect to C, not X. In the example above it will need f=c0*x+c1 and {df/dc0,df/dc1} = {x,1} instead of {df/dx} = {c0}. 2. Callback functions accept C as the first parameter, and X as the second 3. If state was created with LSFitCreateFG(), algorithm needs just function and its gradient, but if state was created with LSFitCreateFGH(), algorithm will need function, gradient and Hessian. As noted above, there are several versions of this function, which accept different sets of callbacks. This flexibility opens the way to subtle errors - you may create state with LSFitCreateFGH() (optimization using Hessian), but call function which does not accept Hessian. So when the algorithm requests the Hessian, there will be no callback to call. In this case an exception will be thrown. Be careful to avoid such errors because there is no way to find them at compile time - you can see them at runtime only. -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void lsfitfit(lsfitstate state, ndimensional_pfunc func, ndimensional_rep rep, object obj)
public static void lsfitfit(lsfitstate state, ndimensional_pfunc func, ndimensional_pgrad grad, ndimensional_rep rep, object obj)
public static void lsfitfit(lsfitstate state, ndimensional_pfunc func, ndimensional_pgrad grad, ndimensional_phess hess, ndimensional_rep rep, object obj)

Examples:   [1]  [2]  [3]  [4]  [5]  

/************************************************************************* Linear least squares fitting. QR decomposition is used to reduce task to MxM, then triangular solver or SVD-based solver is used depending on condition number of the system. This allows us to maximize speed and retain decent accuracy. INPUT PARAMETERS: Y - array[0..N-1] Function values in N points. FMatrix - a table of basis functions values, array[0..N-1, 0..M-1]. FMatrix[I, J] - value of J-th basis function in I-th point. N - number of points used. N>=1. M - number of basis functions, M>=1. OUTPUT PARAMETERS: Info - error code: * -4 internal SVD decomposition subroutine failed (very rare and for degenerate systems only) * 1 task is solved C - decomposition coefficients, array[0..M-1] Rep - fitting report. Following fields are set: * Rep.TaskRCond reciprocal of condition number * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void lsfitlinear( double[] y, double[,] fmatrix, out int info, out double[] c, out lsfitreport rep)
public static void lsfitlinear( double[] y, double[,] fmatrix, int n, int m, out int info, out double[] c, out lsfitreport rep)

Examples:   [1]  

/************************************************************************* Constrained linear least squares fitting. This is a variation of LSFitLinear(), which searches for min|A*x-b| given that K additional constraints C*x=bc are satisfied. It reduces the original task to a modified one: min|B*y-d| WITHOUT constraints, then LSFitLinear() is called. INPUT PARAMETERS: Y - array[0..N-1] Function values in N points. FMatrix - a table of basis functions values, array[0..N-1, 0..M-1]. FMatrix[I,J] - value of J-th basis function in I-th point. CMatrix - a table of constraints, array[0..K-1,0..M]. I-th row of CMatrix corresponds to I-th linear constraint: CMatrix[I,0]*C[0] + ... + CMatrix[I,M-1]*C[M-1] = CMatrix[I,M] N - number of points used. N>=1. M - number of basis functions, M>=1. K - number of constraints, 0 <= K < M K=0 corresponds to absence of constraints. OUTPUT PARAMETERS: Info - error code: * -4 internal SVD decomposition subroutine failed (very rare and for degenerate systems only) * -3 either too many constraints (M or more), degenerate constraints (some constraints are repeated twice) or inconsistent constraints were specified. * 1 task is solved C - decomposition coefficients, array[0..M-1] Rep - fitting report. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. -- ALGLIB -- Copyright 07.09.2009 by Bochkanov Sergey *************************************************************************/
public static void lsfitlinearc( double[] y, double[,] fmatrix, double[,] cmatrix, out int info, out double[] c, out lsfitreport rep)
public static void lsfitlinearc( double[] y, double[,] fmatrix, double[,] cmatrix, int n, int m, int k, out int info, out double[] c, out lsfitreport rep)

Examples:   [1]  

/************************************************************************* Weighted linear least squares fitting. QR decomposition is used to reduce task to MxM, then triangular solver or SVD-based solver is used depending on condition number of the system. This allows us to maximize speed and retain decent accuracy. INPUT PARAMETERS: Y - array[0..N-1] Function values in N points. W - array[0..N-1] Weights corresponding to function values. Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. FMatrix - a table of basis functions values, array[0..N-1, 0..M-1]. FMatrix[I, J] - value of J-th basis function in I-th point. N - number of points used. N>=1. M - number of basis functions, M>=1. OUTPUT PARAMETERS: Info - error code: * -4 internal SVD decomposition subroutine failed (very rare and for degenerate systems only) * -1 incorrect N/M were specified * 1 task is solved C - decomposition coefficients, array[0..M-1] Rep - fitting report. Following fields are set: * Rep.TaskRCond reciprocal of condition number * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void lsfitlinearw( double[] y, double[] w, double[,] fmatrix, out int info, out double[] c, out lsfitreport rep)
public static void lsfitlinearw( double[] y, double[] w, double[,] fmatrix, int n, int m, out int info, out double[] c, out lsfitreport rep)

Examples:   [1]  

/************************************************************************* Weighted constrained linear least squares fitting. This is a variation of LSFitLinearW(), which searches for min|A*x-b| given that K additional constraints C*x=bc are satisfied. It reduces the original task to a modified one: min|B*y-d| WITHOUT constraints, then LSFitLinearW() is called. INPUT PARAMETERS: Y - array[0..N-1] Function values in N points. W - array[0..N-1] Weights corresponding to function values. Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. FMatrix - a table of basis functions values, array[0..N-1, 0..M-1]. FMatrix[I,J] - value of J-th basis function in I-th point. CMatrix - a table of constraints, array[0..K-1,0..M]. I-th row of CMatrix corresponds to I-th linear constraint: CMatrix[I,0]*C[0] + ... + CMatrix[I,M-1]*C[M-1] = CMatrix[I,M] N - number of points used. N>=1. M - number of basis functions, M>=1. K - number of constraints, 0 <= K < M K=0 corresponds to absence of constraints. OUTPUT PARAMETERS: Info - error code: * -4 internal SVD decomposition subroutine failed (very rare and for degenerate systems only) * -3 either too many constraints (M or more), degenerate constraints (some constraints are repeated twice) or inconsistent constraints were specified. * 1 task is solved C - decomposition coefficients, array[0..M-1] Rep - fitting report. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. -- ALGLIB -- Copyright 07.09.2009 by Bochkanov Sergey *************************************************************************/
public static void lsfitlinearwc( double[] y, double[] w, double[,] fmatrix, double[,] cmatrix, out int info, out double[] c, out lsfitreport rep)
public static void lsfitlinearwc( double[] y, double[] w, double[,] fmatrix, double[,] cmatrix, int n, int m, int k, out int info, out double[] c, out lsfitreport rep)

Examples:   [1]  

/************************************************************************* Nonlinear least squares fitting results. Called after return from LSFitFit(). INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: Info - completion code: * 1 relative function improvement is no more than EpsF. * 2 relative step is no more than EpsX. * 4 gradient norm is no more than EpsG * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible C - array[0..K-1], solution Rep - optimization report. Following fields are set: * Rep.TerminationType completion code * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED * WRMSError weighted rms error on the (X,Y). -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void lsfitresults( lsfitstate state, out int info, out double[] c, out lsfitreport rep)

Examples:   [1]  [2]  [3]  [4]  [5]  

/************************************************************************* This function sets boundary constraints for underlying optimizer Boundary constraints are inactive by default (after initial creation). They are preserved until explicitly turned off with another SetBC() call. INPUT PARAMETERS: State - structure stores algorithm state BndL - lower bounds, array[K]. If some (all) variables are unbounded, you may specify very small number or -INF (latter is recommended because it will allow solver to use better algorithm). BndU - upper bounds, array[K]. If some (all) variables are unbounded, you may specify very large number or +INF (latter is recommended because it will allow solver to use better algorithm). NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case I-th variable will be "frozen" at X[i]=BndL[i]=BndU[i]. NOTE 2: unlike other constrained optimization algorithms, this solver has following useful properties: * bound constraints are always satisfied exactly * function is evaluated only INSIDE area specified by bound constraints -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
public static void lsfitsetbc( lsfitstate state, double[] bndl, double[] bndu)
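
A minimal sketch of attaching bound constraints to a fitting state is shown below; the data values, the single-parameter model and the [0,1] bounds are illustrative assumptions, not part of the manual's own examples:

double[,] x = new double[,]{{-1},{0},{1}};           // sample points (illustrative)
double[] y = new double[]{0.22,1.00,0.22};           // function values (illustrative)
double[] c = new double[]{0.3};                      // one parameter, so K=1
double[] bndl = new double[]{0.0};                   // lower bound for c[0]
double[] bndu = new double[]{1.0};                   // upper bound for c[0]
alglib.lsfitstate state;
alglib.lsfitcreatef(x, y, c, 0.0001, out state);     // fitter using numerical differentiation
alglib.lsfitsetbc(state, bndl, bndu);                // bounds stay active until another SetBC() call
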
/************************************************************************* Stopping conditions for nonlinear least squares fitting. INPUT PARAMETERS: State - structure which stores algorithm state EpsF - stopping criterion. Algorithm stops if |F(k+1)-F(k)| <= EpsF*max{|F(k)|, |F(k+1)|, 1} EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |v|<=EpsX is fulfilled, where: * |.| means Euclidean norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by LSFitSetScale() MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Only Levenberg-Marquardt iterations are counted (L-BFGS/CG iterations are NOT counted because their cost is very low compared to that of LM). NOTE Passing EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (according to the scheme used by MINLM unit). -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void lsfitsetcond( lsfitstate state, double epsf, double epsx, int maxits)

Examples:   [1]  [2]  [3]  [4]  [5]  

/************************************************************************* This function sets scaling coefficients for underlying optimizer. ALGLIB optimizers use scaling matrices to test stopping conditions (step size and gradient are scaled before comparison with tolerances). Scale of the I-th variable is a translation invariant measure of: a) "how large" the variable is b) how large the step should be to make significant changes in the function Generally, scale is NOT considered to be a form of preconditioner. But the LM optimizer is unique in that it uses the scaling matrix both in the stopping condition tests and as a Marquardt damping factor. Proper scaling is very important for the algorithm performance. It is less important for the quality of results, but still has some influence (it is easier to converge when variables are properly scaled, so premature stopping is possible when very badly scaled variables are combined with relaxed stopping conditions). INPUT PARAMETERS: State - structure stores algorithm state S - array[N], non-zero scaling coefficients S[i] may be negative, sign doesn't matter. -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
public static void lsfitsetscale(lsfitstate state, double[] s)

Examples:   [1]  

/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the x+stp*d. NOTE: non-zero StpMax leads to moderate performance degradation because intermediate step of preconditioned L-BFGS optimization is incompatible with limits on step size. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void lsfitsetstpmax(lsfitstate state, double stpmax)
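
The following minimal sketch shows how such a step limit might be applied to a freshly created fitter; the data values and the limit of 10.0 are illustrative assumptions only:

double[,] x = new double[,]{{0},{1},{2}};            // sample points (illustrative)
double[] y = new double[]{1.0,2.7,7.4};              // function values (illustrative)
double[] c = new double[]{1.0};                      // initial parameter guess
alglib.lsfitstate state;
alglib.lsfitcreatef(x, y, c, 0.0001, out state);     // fitter using numerical differentiation
alglib.lsfitsetstpmax(state, 10.0);                  // cap step length; pass 0.0 to remove the limit
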
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not When reports are needed, State.C (current parameters) and State.F (current value of fitting function) are reported. -- ALGLIB -- Copyright 15.08.2010 by Bochkanov Sergey *************************************************************************/
public static void lsfitsetxrep(lsfitstate state, bool needxrep)
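
One possible way to consume these reports is sketched below. It assumes that the ndimensional_rep callback accepted by LSFitFit() receives the current parameter vector and the current merit-function value; fit_progress is a hypothetical name introduced here for illustration:

public static void fit_progress(double[] c, double func, object obj)
{
    // assumed report callback signature: c = current parameters, func = current merit-function value
    System.Console.WriteLine("c[0]={0}, F={1}", c[0], func);
}
// ... state created and configured as in the examples below ...
alglib.lsfitsetxrep(state, true);                    // ask for per-iteration reports
alglib.lsfitfit(state, function_cx_1_func, fit_progress, null);
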
/************************************************************************* Fitting by polynomials in barycentric form. This function provides a simple interface for unconstrained unweighted fitting. See PolynomialFitWC() if you need constrained fitting. Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver SEE ALSO: PolynomialFitWC() INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. N - number of points, N>0 * if given, only leading N elements of X/Y are used * if not given, automatically determined from sizes of X/Y M - number of basis functions (= polynomial_degree + 1), M>=1 OUTPUT PARAMETERS: Info- same format as in LSFitLinearW() subroutine: * Info>0 task is solved * Info<=0 an error occurred: -4 means inconvergence of internal SVD P - interpolant in barycentric form. Rep - report, same format as in LSFitLinearW() subroutine. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED NOTES: you can convert P from barycentric form to the power or Chebyshev basis with PolynomialBar2Pow() or PolynomialBar2Cheb() functions from POLINT subpackage. -- ALGLIB PROJECT -- Copyright 10.12.2009 by Bochkanov Sergey *************************************************************************/
public static void polynomialfit( double[] x, double[] y, int m, out int info, out barycentricinterpolant p, out polynomialfitreport rep)
public static void polynomialfit( double[] x, double[] y, int n, int m, out int info, out barycentricinterpolant p, out polynomialfitreport rep)

Examples:   [1]  

/************************************************************************* Weighted fitting by polynomials in barycentric form, with constraints on function values or first derivatives. Small regularizing term is used when solving constrained tasks (to improve stability). Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver SEE ALSO: PolynomialFit() INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points, N>0. * if given, only leading N elements of X/Y/W are used * if not given, automatically determined from sizes of X/Y/W XC - points where polynomial values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that P(XC[i])=YC[i] * DC[i]=1 means that P'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints, 0<=K<M. K=0 means no constraints (XC/YC/DC are not used in such cases) M - number of basis functions (= polynomial_degree + 1), M>=1 OUTPUT PARAMETERS: Info- same format as in LSFitLinearW() subroutine: * Info>0 task is solved * Info<=0 an error occurred: -4 means inconvergence of internal SVD -3 means inconsistent constraints P - interpolant in barycentric form. Rep - report, same format as in LSFitLinearW() subroutine. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. NOTES: you can convert P from barycentric form to the power or Chebyshev basis with PolynomialBar2Pow() or PolynomialBar2Cheb() functions from POLINT subpackage. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve the quality of the fit. Here we summarize our experience with constrained regression splines: * even simple constraints can be inconsistent, see Wikipedia article on this subject: http://en.wikipedia.org/wiki/Birkhoff_interpolation * the greater M is (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints is NOT GUARANTEED. * in one special case, however, we can guarantee consistency. This case is: M>1 and constraints on the function values (NOT DERIVATIVES) Our final recommendation is to use constraints WHEN AND ONLY WHEN you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. -- ALGLIB PROJECT -- Copyright 10.12.2009 by Bochkanov Sergey *************************************************************************/
public static void polynomialfitwc( double[] x, double[] y, double[] w, double[] xc, double[] yc, int[] dc, int m, out int info, out barycentricinterpolant p, out polynomialfitreport rep)
public static void polynomialfitwc( double[] x, double[] y, double[] w, int n, double[] xc, double[] yc, int[] dc, int k, int m, out int info, out barycentricinterpolant p, out polynomialfitreport rep)

Examples:   [1]  

/************************************************************************* Least squares fitting by cubic spline. This subroutine is a "lightweight" alternative to the more complex and feature-rich Spline1DFitCubicWC(). See Spline1DFitCubicWC() for more information about subroutine parameters (we don't duplicate them here because of length) -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
public static void spline1dfitcubic( double[] x, double[] y, int m, out int info, out spline1dinterpolant s, out spline1dfitreport rep)
public static void spline1dfitcubic( double[] x, double[] y, int n, int m, out int info, out spline1dinterpolant s, out spline1dfitreport rep)
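
For orientation, a minimal sketch of the lightweight call follows; the data values and the choice M=4 are illustrative assumptions:

double[] x = new double[]{0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9};
double[] y = new double[]{0.10,0.00,0.30,0.40,0.30,0.40,0.62,0.68,0.75,0.95};
int info;
alglib.spline1dinterpolant s;
alglib.spline1dfitreport rep;
alglib.spline1dfitcubic(x, y, 4, out info, out s, out rep);   // M=4 cubic spline basis functions
double v = alglib.spline1dcalc(s, 0.5);                       // evaluate the fitted spline at x=0.5
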
/************************************************************************* Weighted fitting by cubic spline, with constraints on function values or derivatives. Equidistant grid with M-2 nodes on [min(x,xc),max(x,xc)] is used to build basis functions. Basis functions are cubic splines with continuous second derivatives and non-fixed first derivatives at interval ends. Small regularizing term is used when solving constrained tasks (to improve stability). Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver SEE ALSO Spline1DFitHermiteWC() - fitting by Hermite splines (more flexible, less smooth) Spline1DFitCubic() - "lightweight" fitting by cubic splines, without individual weights and constraints INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points (optional): * N>0 * if given, only first N elements of X/Y/W are processed * if not given, automatically determined from X/Y/W sizes XC - points where spline values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints (optional): * 0<=K<M. * K=0 means no constraints (XC/YC/DC are not used) * if given, only first K elements of XC/YC/DC are used * if not given, automatically determined from XC/YC/DC M - number of basis functions ( = number_of_nodes+2), M>=4. OUTPUT PARAMETERS: Info- same format as in LSFitLinearWC() subroutine. * Info>0 task is solved * Info<=0 an error occurred: -4 means inconvergence of internal SVD -3 means inconsistent constraints S - spline interpolant. Rep - report, same format as in LSFitLinearWC() subroutine. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve the quality of the fit. Here we summarize our experience with constrained regression splines: * excessive constraints can be inconsistent. Splines are piecewise cubic functions, and it is easy to create an example where a large number of constraints concentrated in a small area will result in inconsistency, simply because the spline is not flexible enough to satisfy all of them. The same constraints spread across [min(x),max(x)] will be perfectly consistent. * the more evenly constraints are spread across [min(x),max(x)], the more chances that they will be consistent * the greater M is (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints IS NOT GUARANTEED. * in several special cases, however, we CAN guarantee consistency. * one of these cases is constraints on the function values AND/OR their derivatives at the interval boundaries. * another special case is ONE constraint on the function value (OR, but not AND, derivative) anywhere in the interval Our final recommendation is to use constraints WHEN AND ONLY WHEN you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
public static void spline1dfitcubicwc( double[] x, double[] y, double[] w, double[] xc, double[] yc, int[] dc, int m, out int info, out spline1dinterpolant s, out spline1dfitreport rep)
public static void spline1dfitcubicwc( double[] x, double[] y, double[] w, int n, double[] xc, double[] yc, int[] dc, int k, int m, out int info, out spline1dinterpolant s, out spline1dfitreport rep)
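
To make the constraint arguments concrete, here is a minimal sketch that pins the spline to zero at the left boundary; the data, the unit weights and M=4 are illustrative assumptions:

double[] x  = new double[]{0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9};
double[] y  = new double[]{0.10,0.00,0.30,0.40,0.30,0.40,0.62,0.68,0.75,0.95};
double[] w  = new double[]{1,1,1,1,1,1,1,1,1,1};     // unit weights = unweighted task
double[] xc = new double[]{0.0};                     // constraint location
double[] yc = new double[]{0.0};                     // constrained value
int[]    dc = new int[]{0};                          // 0 = constraint on the function value
int info;
alglib.spline1dinterpolant s;
alglib.spline1dfitreport rep;
alglib.spline1dfitcubicwc(x, y, w, xc, yc, dc, 4, out info, out s, out rep);
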
/************************************************************************* Least squares fitting by Hermite spline. This subroutine is a "lightweight" alternative to the more complex and feature-rich Spline1DFitHermiteWC(). See Spline1DFitHermiteWC() description for more information about subroutine parameters (we don't duplicate them here because of length). -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
public static void spline1dfithermite( double[] x, double[] y, int m, out int info, out spline1dinterpolant s, out spline1dfitreport rep)
public static void spline1dfithermite( double[] x, double[] y, int n, int m, out int info, out spline1dinterpolant s, out spline1dfitreport rep)
/************************************************************************* Weighted fitting by Hermite spline, with constraints on function values or first derivatives. Equidistant grid with M nodes on [min(x,xc),max(x,xc)] is used to build basis functions. Basis functions are Hermite splines. Small regularizing term is used when solving constrained tasks (to improve stability). Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver SEE ALSO Spline1DFitCubicWC() - fitting by Cubic splines (less flexible, more smooth) Spline1DFitHermite() - "lightweight" Hermite fitting, without individual weights and constraints INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points (optional): * N>0 * if given, only first N elements of X/Y/W are processed * if not given, automatically determined from X/Y/W sizes XC - points where spline values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints (optional): * 0<=K<M. * K=0 means no constraints (XC/YC/DC are not used) * if given, only first K elements of XC/YC/DC are used * if not given, automatically determined from XC/YC/DC M - number of basis functions (= 2 * number of nodes), M>=4, M IS EVEN! OUTPUT PARAMETERS: Info- same format as in LSFitLinearW() subroutine: * Info>0 task is solved * Info<=0 an error occurred: -4 means inconvergence of internal SVD -3 means inconsistent constraints -2 means odd M was passed (which is not supported) -1 means other errors in parameters passed (N<=0, for example) S - spline interpolant. Rep - report, same format as in LSFitLinearW() subroutine. Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. IMPORTANT: this subroutine supports only even M's ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve the quality of the fit. Here we summarize our experience with constrained regression splines: * excessive constraints can be inconsistent. Splines are piecewise cubic functions, and it is easy to create an example where a large number of constraints concentrated in a small area will result in inconsistency, simply because the spline is not flexible enough to satisfy all of them. The same constraints spread across [min(x),max(x)] will be perfectly consistent. * the more evenly constraints are spread across [min(x),max(x)], the more chances that they will be consistent * the greater M is (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints is NOT GUARANTEED. * in several special cases, however, we can guarantee consistency. * one of these cases is M>=4 and constraints on the function value (AND/OR its derivative) at the interval boundaries. * another special case is M>=4 and ONE constraint on the function value (OR, BUT NOT AND, derivative) anywhere in [min(x),max(x)] Our final recommendation is to use constraints WHEN AND ONLY WHEN you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
public static void spline1dfithermitewc( double[] x, double[] y, double[] w, double[] xc, double[] yc, int[] dc, int m, out int info, out spline1dinterpolant s, out spline1dfitreport rep)
public static void spline1dfithermitewc( double[] x, double[] y, double[] w, int n, double[] xc, double[] yc, int[] dc, int k, int m, out int info, out spline1dinterpolant s, out spline1dfitreport rep)
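
A minimal sketch of an unconstrained call, mainly to illustrate the even-M requirement; the data values and the choice M=6 are illustrative assumptions:

double[] x  = new double[]{0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9};
double[] y  = new double[]{0.10,0.00,0.30,0.40,0.30,0.40,0.62,0.68,0.75,0.95};
double[] w  = new double[]{1,1,1,1,1,1,1,1,1,1};     // unit weights
double[] xc = new double[0];                         // no constraints (K=0)
double[] yc = new double[0];
int[]    dc = new int[0];
int info;
alglib.spline1dinterpolant s;
alglib.spline1dfitreport rep;
alglib.spline1dfithermitewc(x, y, w, xc, yc, dc, 6, out info, out s, out rep);   // M=6: even and >=4
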
/************************************************************************* Fitting by penalized cubic spline, without individual weights. This subroutine is a "lightweight" alternative to the more feature-rich Spline1DFitPenalizedW(). See Spline1DFitPenalizedW() for more information about subroutine parameters (we don't duplicate them here because of length). -- ALGLIB PROJECT -- Copyright 19.10.2010 by Bochkanov Sergey *************************************************************************/
public static void spline1dfitpenalized( double[] x, double[] y, int m, double rho, out int info, out spline1dinterpolant s, out spline1dfitreport rep)
public static void spline1dfitpenalized( double[] x, double[] y, int n, int m, double rho, out int info, out spline1dinterpolant s, out spline1dfitreport rep)

Examples:   [1]  

/************************************************************************* Weighted fitting by penalized cubic spline. Equidistant grid with M nodes on [min(x,xc),max(x,xc)] is used to build basis functions. Basis functions are cubic splines with natural boundary conditions. Problem is regularized by adding non-linearity penalty to the usual least squares penalty function: S(x) = arg min { LS + P }, where LS = SUM { w[i]^2*(y[i] - S(x[i]))^2 } - least squares penalty P = C*10^rho*integral{ S''(x)^2*dx } - non-linearity penalty rho - tunable constant given by user C - automatically determined scale parameter, makes penalty invariant with respect to scaling of X, Y, W. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted problem. N - number of points (optional): * N>0 * if given, only first N elements of X/Y/W are processed * if not given, automatically determined from X/Y/W sizes M - number of basis functions ( = number_of_nodes), M>=4. Rho - regularization constant passed by user. It penalizes nonlinearity in the regression spline. It is logarithmically scaled, i.e. actual value of regularization constant is calculated as 10^Rho. It is automatically scaled so that: * Rho=2.0 corresponds to moderate amount of nonlinearity * generally, it should be somewhere in [-8.0,+8.0] If you do not want to penalize nonlinearity, pass small Rho. Values as low as -15 should work. OUTPUT PARAMETERS: Info- same format as in LSFitLinearWC() subroutine. * Info>0 task is solved * Info<=0 an error occurred: -4 means inconvergence of internal SVD or Cholesky decomposition; problem may be too ill-conditioned (very rare) S - spline interpolant. Rep - Following fields are set: * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. NOTE 1: additional nodes are added to the spline outside of the fitting interval to force linearity when x<min(x,xc) or x>max(x,xc). It is done for consistency - we penalize non-linearity at [min(x,xc),max(x,xc)], so it is natural to force linearity outside of this interval. NOTE 2: function automatically sorts points, so caller may pass unsorted array. -- ALGLIB PROJECT -- Copyright 19.10.2010 by Bochkanov Sergey *************************************************************************/
public static void spline1dfitpenalizedw( double[] x, double[] y, double[] w, int m, double rho, out int info, out spline1dinterpolant s, out spline1dfitreport rep)
public static void spline1dfitpenalizedw( double[] x, double[] y, double[] w, int n, int m, double rho, out int info, out spline1dinterpolant s, out spline1dfitreport rep)

Examples:   [1]  


public static int Main(string[] args)
{
    //
    // In this example we demonstrate linear fitting by f(x|a) = a*exp(0.5*x).
    //
    // We have:
    // * y - vector of experimental data
    // * fmatrix -  matrix of basis functions calculated at sample points
    //              Actually, we have only one basis function F0 = exp(0.5*x).
    //
    double[,] fmatrix = new double[,]{{0.606531},{0.670320},{0.740818},{0.818731},{0.904837},{1.000000},{1.105171},{1.221403},{1.349859},{1.491825},{1.648721}};
    double[] y = new double[]{1.133719,1.306522,1.504604,1.554663,1.884638,2.072436,2.257285,2.534068,2.622017,2.897713,3.219371};
    int info;
    double[] c;
    alglib.lsfitreport rep;

    //
    // Linear fitting without weights
    //
    alglib.lsfitlinear(y, fmatrix, out info, out c, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    System.Console.WriteLine("{0}", alglib.ap.format(c,4)); // EXPECTED: [1.98650]

    //
    // Linear fitting with individual weights.
    // Slightly different result is returned.
    //
    double[] w = new double[]{1.414213,1,1,1,1,1,1,1,1,1,1};
    alglib.lsfitlinearw(y, w, fmatrix, out info, out c, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    System.Console.WriteLine("{0}", alglib.ap.format(c,4)); // EXPECTED: [1.983354]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // In this example we demonstrate linear fitting by f(x|a,b) = a*x+b
    // with simple constraint f(0)=0.
    //
    // We have:
    // * y - vector of experimental data
    // * fmatrix -  matrix of basis functions sampled at [0,1] with step 0.2:
    //                  [ 1.0   0.0 ]
    //                  [ 1.0   0.2 ]
    //                  [ 1.0   0.4 ]
    //                  [ 1.0   0.6 ]
    //                  [ 1.0   0.8 ]
    //                  [ 1.0   1.0 ]
    //              first column contains value of first basis function (constant term)
    //              second column contains second basis function (linear term)
    // * cmatrix -  matrix of linear constraints:
    //                  [ 1.0  0.0  0.0 ]
    //              first two columns contain coefficients before basis functions,
    //              last column contains desired value of their sum.
    //              So [1,0,0] means "1*constant_term + 0*linear_term = 0" 
    //
    double[] y = new double[]{0.072436,0.246944,0.491263,0.522300,0.714064,0.921929};
    double[,] fmatrix = new double[,]{{1,0.0},{1,0.2},{1,0.4},{1,0.6},{1,0.8},{1,1.0}};
    double[,] cmatrix = new double[,]{{1,0,0}};
    int info;
    double[] c;
    alglib.lsfitreport rep;

    //
    // Constrained fitting without weights
    //
    alglib.lsfitlinearc(y, fmatrix, cmatrix, out info, out c, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    System.Console.WriteLine("{0}", alglib.ap.format(c,3)); // EXPECTED: [0,0.932933]

    //
    // Constrained fitting with individual weights
    //
    double[] w = new double[]{1,1.414213,1,1,1,1};
    alglib.lsfitlinearwc(y, w, fmatrix, cmatrix, out info, out c, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    System.Console.WriteLine("{0}", alglib.ap.format(c,3)); // EXPECTED: [0,0.938322]
    System.Console.ReadLine();
    return 0;
}


public static void function_cx_1_func(double[] c, double[] x, ref double func, object obj)
{
    // this callback calculates f(c,x)=exp(-c0*sqr(x0))
    // where x is a position on X-axis and c is adjustable parameter
    func = System.Math.Exp(-c[0]*x[0]*x[0]);
}
public static int Main(string[] args)
{
    //
    // In this example we demonstrate exponential fitting
    // by f(x) = exp(-c*x^2)
    // using function value only.
    //
    // Gradient is estimated using combination of numerical differences
    // and secant updates. diffstep variable stores differentiation step 
    // (we have to tell algorithm what step to use).
    //
    double[,] x = new double[,]{{-1},{-0.8},{-0.6},{-0.4},{-0.2},{0},{0.2},{0.4},{0.6},{0.8},{1.0}};
    double[] y = new double[]{0.223130,0.382893,0.582748,0.786628,0.941765,1.000000,0.941765,0.786628,0.582748,0.382893,0.223130};
    double[] c = new double[]{0.3};
    double epsf = 0;
    double epsx = 0.000001;
    int maxits = 0;
    int info;
    alglib.lsfitstate state;
    alglib.lsfitreport rep;
    double diffstep = 0.0001;

    //
    // Fitting without weights
    //
    alglib.lsfitcreatef(x, y, c, diffstep, out state);
    alglib.lsfitsetcond(state, epsf, epsx, maxits);
    alglib.lsfitfit(state, function_cx_1_func, null, null);
    alglib.lsfitresults(state, out info, out c, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 2
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [1.5]

    //
    // Fitting with weights
    // (you can change weights and see how it changes result)
    //
    double[] w = new double[]{1,1,1,1,1,1,1,1,1,1,1};
    alglib.lsfitcreatewf(x, y, w, c, diffstep, out state);
    alglib.lsfitsetcond(state, epsf, epsx, maxits);
    alglib.lsfitfit(state, function_cx_1_func, null, null);
    alglib.lsfitresults(state, out info, out c, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 2
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [1.5]
    System.Console.ReadLine();
    return 0;
}


public static void function_cx_1_func(double[] c, double[] x, ref double func, object obj)
{
    // this callback calculates f(c,x)=exp(-c0*sqr(x0))
    // where x is a position on X-axis and c is adjustable parameter
    func = System.Math.Exp(-c[0]*x[0]*x[0]);
}
public static int Main(string[] args)
{
    //
    // In this example we demonstrate exponential fitting by
    //     f(x) = exp(-c*x^2)
    // subject to bound constraints
    //     0.0 <= c <= 1.0
    // using function value only.
    //
    // Gradient is estimated using combination of numerical differences
    // and secant updates. diffstep variable stores differentiation step 
    // (we have to tell algorithm what step to use).
    //
    // Unconstrained solution is c=1.5, but because of constraints we should
    // get c=1.0 (at the boundary).
    //
    double[,] x = new double[,]{{-1},{-0.8},{-0.6},{-0.4},{-0.2},{0},{0.2},{0.4},{0.6},{0.8},{1.0}};
    double[] y = new double[]{0.223130,0.382893,0.582748,0.786628,0.941765,1.000000,0.941765,0.786628,0.582748,0.382893,0.223130};
    double[] c = new double[]{0.3};
    double[] bndl = new double[]{0.0};
    double[] bndu = new double[]{1.0};
    double epsf = 0;
    double epsx = 0.000001;
    int maxits = 0;
    int info;
    alglib.lsfitstate state;
    alglib.lsfitreport rep;
    double diffstep = 0.0001;

    alglib.lsfitcreatef(x, y, c, diffstep, out state);
    alglib.lsfitsetbc(state, bndl, bndu);
    alglib.lsfitsetcond(state, epsf, epsx, maxits);
    alglib.lsfitfit(state, function_cx_1_func, null, null);
    alglib.lsfitresults(state, out info, out c, out rep);
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [1.0]
    System.Console.ReadLine();
    return 0;
}


public static void function_cx_1_func(double[] c, double[] x, ref double func, object obj)
{
    // this callback calculates f(c,x)=exp(-c0*sqr(x0))
    // where x is a position on X-axis and c is adjustable parameter
    func = System.Math.Exp(-c[0]*x[0]*x[0]);
}
public static void function_cx_1_grad(double[] c, double[] x, ref double func, double[] grad, object obj)
{
    // this callback calculates f(c,x)=exp(-c0*sqr(x0)) and gradient G={df/dc[i]}
    // where x is a position on X-axis and c is adjustable parameter.
    // IMPORTANT: gradient is calculated with respect to C, not to X
    func = System.Math.Exp(-c[0]*System.Math.Pow(x[0],2));
    grad[0] = -System.Math.Pow(x[0],2)*func;
}
public static int Main(string[] args)
{
    //
    // In this example we demonstrate exponential fitting
    // by f(x) = exp(-c*x^2)
    // using function value and gradient (with respect to c).
    //
    double[,] x = new double[,]{{-1},{-0.8},{-0.6},{-0.4},{-0.2},{0},{0.2},{0.4},{0.6},{0.8},{1.0}};
    double[] y = new double[]{0.223130,0.382893,0.582748,0.786628,0.941765,1.000000,0.941765,0.786628,0.582748,0.382893,0.223130};
    double[] c = new double[]{0.3};
    double epsf = 0;
    double epsx = 0.000001;
    int maxits = 0;
    int info;
    alglib.lsfitstate state;
    alglib.lsfitreport rep;

    //
    // Fitting without weights
    //
    alglib.lsfitcreatefg(x, y, c, true, out state);
    alglib.lsfitsetcond(state, epsf, epsx, maxits);
    alglib.lsfitfit(state, function_cx_1_func, function_cx_1_grad, null, null);
    alglib.lsfitresults(state, out info, out c, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 2
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [1.5]

    //
    // Fitting with weights
    // (you can change weights and see how it changes result)
    //
    double[] w = new double[]{1,1,1,1,1,1,1,1,1,1,1};
    alglib.lsfitcreatewfg(x, y, w, c, true, out state);
    alglib.lsfitsetcond(state, epsf, epsx, maxits);
    alglib.lsfitfit(state, function_cx_1_func, function_cx_1_grad, null, null);
    alglib.lsfitresults(state, out info, out c, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 2
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [1.5]
    System.Console.ReadLine();
    return 0;
}


public static void function_cx_1_func(double[] c, double[] x, ref double func, object obj)
{
    // this callback calculates f(c,x)=exp(-c0*sqr(x0))
    // where x is a position on X-axis and c is adjustable parameter
    func = System.Math.Exp(-c[0]*x[0]*x[0]);
}
public static void function_cx_1_grad(double[] c, double[] x, ref double func, double[] grad, object obj)
{
    // this callback calculates f(c,x)=exp(-c0*sqr(x0)) and gradient G={df/dc[i]}
    // where x is a position on X-axis and c is adjustable parameter.
    // IMPORTANT: gradient is calculated with respect to C, not to X
    func = System.Math.Exp(-c[0]*System.Math.Pow(x[0],2));
    grad[0] = -System.Math.Pow(x[0],2)*func;
}
public static void function_cx_1_hess(double[] c, double[] x, ref double func, double[] grad, double[,] hess, object obj)
{
    // this callback calculates f(c,x)=exp(-c0*sqr(x0)), gradient G={df/dc[i]} and Hessian H={d2f/(dc[i]*dc[j])}
    // where x is a position on X-axis and c is adjustable parameter.
    // IMPORTANT: gradient/Hessian are calculated with respect to C, not to X
    func = System.Math.Exp(-c[0]*System.Math.Pow(x[0],2));
    grad[0] = -System.Math.Pow(x[0],2)*func;
    hess[0,0] = System.Math.Pow(x[0],4)*func;
}
public static int Main(string[] args)
{
    //
    // In this example we demonstrate exponential fitting
    // by f(x) = exp(-c*x^2)
    // using function value, gradient and Hessian (with respect to c)
    //
    double[,] x = new double[,]{{-1},{-0.8},{-0.6},{-0.4},{-0.2},{0},{0.2},{0.4},{0.6},{0.8},{1.0}};
    double[] y = new double[]{0.223130,0.382893,0.582748,0.786628,0.941765,1.000000,0.941765,0.786628,0.582748,0.382893,0.223130};
    double[] c = new double[]{0.3};
    double epsf = 0;
    double epsx = 0.000001;
    int maxits = 0;
    int info;
    alglib.lsfitstate state;
    alglib.lsfitreport rep;

    //
    // Fitting without weights
    //
    alglib.lsfitcreatefgh(x, y, c, out state);
    alglib.lsfitsetcond(state, epsf, epsx, maxits);
    alglib.lsfitfit(state, function_cx_1_func, function_cx_1_grad, function_cx_1_hess, null, null);
    alglib.lsfitresults(state, out info, out c, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 2
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [1.5]

    //
    // Fitting with weights
    // (you can change weights and see how it changes result)
    //
    double[] w = new double[]{1,1,1,1,1,1,1,1,1,1,1};
    alglib.lsfitcreatewfgh(x, y, w, c, out state);
    alglib.lsfitsetcond(state, epsf, epsx, maxits);
    alglib.lsfitfit(state, function_cx_1_func, function_cx_1_grad, function_cx_1_hess, null, null);
    alglib.lsfitresults(state, out info, out c, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 2
    System.Console.WriteLine("{0}", alglib.ap.format(c,1)); // EXPECTED: [1.5]
    System.Console.ReadLine();
    return 0;
}


public static void function_debt_func(double[] c, double[] x, ref double func, object obj)
{
    //
    // this callback calculates f(c,x)=c[0]*(1+c[1]*(pow(x[0]-1999,c[2])-1))
    //
    func = c[0]*(1+c[1]*(System.Math.Pow(x[0]-1999,c[2])-1));
}
public static int Main(string[] args)
{
    //
    // In this example we demonstrate fitting by
    //     f(x) = c[0]*(1+c[1]*((x-1999)^c[2]-1))
    // subject to bound constraints
    //     -INF  < c[0] < +INF
    //      -10 <= c[1] <= +10
    //      0.1 <= c[2] <= 2.0
    // Data we want to fit are time series of Japan national debt
    // collected from 2000 to 2008 measured in USD (dollars, not
    // millions of dollars).
    //
    // Our variables are:
    //     c[0] - debt value at initial moment (2000),
    //     c[1] - direction coefficient (growth or decrease),
    //     c[2] - curvature coefficient.
    // You may see that our variables are badly scaled - first one 
    // is order of 10^12, and next two are somewhere about 1 in 
    // magnitude. Such problem is difficult to solve without some
    // kind of scaling.
    // That is exactly where lsfitsetscale() function can be used.
    // We set scale of our variables to [1.0E12, 1, 1], which allows
    // us to easily solve this problem.
    //
    // You can try commenting out lsfitsetscale() call - and you will 
    // see that algorithm will fail to converge.
    //
    double[,] x = new double[,]{{2000},{2001},{2002},{2003},{2004},{2005},{2006},{2007},{2008}};
    double[] y = new double[]{4323239600000.0,4560913100000.0,5564091500000.0,6743189300000.0,7284064600000.0,7050129600000.0,7092221500000.0,8483907600000.0,8625804400000.0};
    double[] c = new double[]{1.0e+13,1,1};
    double epsf = 0;
    double epsx = 1.0e-5;
    double[] bndl = new double[]{-System.Double.PositiveInfinity,-10,0.1};
    double[] bndu = new double[]{System.Double.PositiveInfinity,+10,2.0};
    double[] s = new double[]{1.0e+12,1,1};
    int maxits = 0;
    int info;
    alglib.lsfitstate state;
    alglib.lsfitreport rep;
    double diffstep = 1.0e-5;

    alglib.lsfitcreatef(x, y, c, diffstep, out state);
    alglib.lsfitsetcond(state, epsf, epsx, maxits);
    alglib.lsfitsetbc(state, bndl, bndu);
    alglib.lsfitsetscale(state, s);
    alglib.lsfitfit(state, function_debt_func, null, null);
    alglib.lsfitresults(state, out info, out c, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 2
    System.Console.WriteLine("{0}", alglib.ap.format(c,-2)); // EXPECTED: [4.142560E+12, 0.434240, 0.565376]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // This example demonstrates polynomial fitting.
    //
    // Fitting is done by two (M=2) functions from polynomial basis:
    //     f0 = 1
    //     f1 = x
    // Basically, it is just a linear fit; more complex polynomials may be used
    // (e.g. parabolas with M=3, cubic with M=4), but even such simple fit allows
    // us to demonstrate polynomialfit() function in action.
    //
    // We have:
    // * x      set of abscissas
    // * y      experimental data
    //
    // Additionally we demonstrate weighted fitting, where second point has
    // more weight than other ones.
    //
    double[] x = new double[]{0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0};
    double[] y = new double[]{0.00,0.05,0.26,0.32,0.33,0.43,0.60,0.60,0.77,0.98,1.02};
    int m = 2;
    double t = 2;
    int info;
    alglib.barycentricinterpolant p;
    alglib.polynomialfitreport rep;
    double v;

    //
    // Fitting without individual weights
    //
    // NOTE: result is returned as barycentricinterpolant structure.
    //       if you want to get representation in the power basis,
    //       you can use barycentricbar2pow() function to convert
    //       from barycentric to power representation (see docs for 
    //       POLINT subpackage for more info).
    //
    alglib.polynomialfit(x, y, m, out info, out p, out rep);
    v = alglib.barycentriccalc(p, t);
    System.Console.WriteLine("{0:F2", v); // EXPECTED: 2.011

    //
    // Fitting with individual weights
    //
    // NOTE: slightly different result is returned
    //
    double[] w = new double[]{1,1.414213562,1,1,1,1,1,1,1,1,1};
    double[] xc = new double[0];
    double[] yc = new double[0];
    int[] dc = new int[0];
    alglib.polynomialfitwc(x, y, w, xc, yc, dc, m, out info, out p, out rep);
    v = alglib.barycentriccalc(p, t);
    System.Console.WriteLine("{0:F2}", v); // EXPECTED: 2.023
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // This example demonstrates polynomial fitting.
    //
    // Fitting is done by two (M=2) functions from the polynomial basis:
    //     f0 = 1
    //     f1 = x
    // with a simple constraint on the function value
    //     f(0) = 0
    // Basically, it is just a linear fit; more complex polynomials may be used
    // (e.g. parabolas with M=3, cubics with M=4), but even such a simple fit allows
    // us to demonstrate the polynomialfitwc() function in action.
    //
    // We have:
    // * x      set of abscissas
    // * y      experimental data
    // * xc     points where constraints are placed
    // * yc     constraint values (function value or derivative, see dc)
    // * dc     derivative indices
    //          (0 means function itself, 1 means first derivative)
    //
    double[] x = new double[]{1.0,1.0};
    double[] y = new double[]{0.9,1.1};
    double[] w = new double[]{1,1};
    double[] xc = new double[]{0};
    double[] yc = new double[]{0};
    int[] dc = new int[]{0};
    double t = 2;
    int m = 2;
    int info;
    alglib.barycentricinterpolant p;
    alglib.polynomialfitreport rep;
    double v;

    alglib.polynomialfitwc(x, y, w, xc, yc, dc, m, out info, out p, out rep);
    v = alglib.barycentriccalc(p, t);
    System.Console.WriteLine("{0:F2}", v); // EXPECTED: 2.000
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // In this example we demonstrate penalized spline fitting of noisy data
    //
    // We have:
    // * x - abscissas
    // * y - vector of experimental data, straight line with small noise
    //
    double[] x = new double[]{0.00,0.10,0.20,0.30,0.40,0.50,0.60,0.70,0.80,0.90};
    double[] y = new double[]{0.10,0.00,0.30,0.40,0.30,0.40,0.62,0.68,0.75,0.95};
    int info;
    double v;
    alglib.spline1dinterpolant s;
    alglib.spline1dfitreport rep;
    double rho;

    //
    // Fit with VERY small amount of smoothing (rho = -5.0)
    // and large number of basis functions (M=50).
    //
    // With such small regularization the penalized spline almost fully reproduces the function values
    //
    rho = -5.0;
    alglib.spline1dfitpenalized(x, y, 50, rho, out info, out s, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    v = alglib.spline1dcalc(s, 0.0);
    System.Console.WriteLine("{0:F1}", v); // EXPECTED: 0.10

    //
    // Fit with VERY large amount of smoothing (rho = 10.0)
    // and large number of basis functions (M=50).
    //
    // With such regularization our spline should become close to the straight line fit.
    // We will compare its value at x=1.0 with the result of such a fit.
    //
    rho = +10.0;
    alglib.spline1dfitpenalized(x, y, 50, rho, out info, out s, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    v = alglib.spline1dcalc(s, 1.0);
    System.Console.WriteLine("{0:F2}", v); // EXPECTED: 0.969

    //
    // In real-life applications you may need a moderate degree of smoothing,
    // so we fit once more with rho=3.0.
    //
    rho = +3.0;
    alglib.spline1dfitpenalized(x, y, 50, rho, out info, out s, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    System.Console.ReadLine();
    return 0;
}


mannwhitneyutest
/************************************************************************* Mann-Whitney U-test This test checks hypotheses about whether X and Y are samples of two continuous distributions of the same shape and same median or whether their medians are different. The following tests are performed: * two-tailed test (null hypothesis - the medians are equal) * left-tailed test (null hypothesis - the median of the first sample is greater than or equal to the median of the second sample) * right-tailed test (null hypothesis - the median of the first sample is less than or equal to the median of the second sample). Requirements: * the samples are independent * X and Y are continuous distributions (or discrete distributions well- approximating continuous distributions) * distributions of X and Y have the same shape. The only possible difference is their position (i.e. the value of the median) * the number of elements in each sample is not less than 5 * the scale of measurement should be ordinal, interval or ratio (i.e. the test could not be applied to nominal variables). The test is non-parametric and doesn't require distributions to be normal. Input parameters: X - sample 1. Array whose index goes from 0 to N-1. N - size of the sample. N>=5 Y - sample 2. Array whose index goes from 0 to M-1. M - size of the sample. M>=5 Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. To calculate p-values, special approximation is used. This method lets us calculate p-values with satisfactory accuracy in interval [0.0001, 1]. There is no approximation outside the [0.0001, 1] interval. Therefore, if the significance level outlies this interval, the test returns 0.0001. Relative precision of approximation of p-value: N M Max.err. Rms.err. 5..10 N..10 1.4e-02 6.0e-04 5..10 N..100 2.2e-02 5.3e-06 10..15 N..15 1.0e-02 3.2e-04 10..15 N..100 1.0e-02 2.2e-05 15..100 N..100 6.1e-03 2.7e-06 For N,M>100 accuracy checks weren't put into practice, but taking into account characteristics of asymptotic approximation used, precision should not be sharply different from the values for interval [5, 100]. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
public static void mannwhitneyutest( double[] x, int n, double[] y, int m, out double bothtails, out double lefttail, out double righttail)
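
The manual does not include a worked example for this entry in the section above, so the following is a minimal usage sketch built only from the signature given; the sample values are arbitrary illustration data, not taken from the manual.

public static int Main(string[] args)
{
    // two independent samples (illustrative values), each with at least 5 elements
    double[] x = new double[]{0.5,1.2,1.9,2.5,3.1,3.8,4.4};
    double[] y = new double[]{1.1,1.8,2.4,3.0,3.7,4.5,5.2};
    double bothtails;
    double lefttail;
    double righttail;

    // sample sizes are passed explicitly, as in the signature above
    alglib.mannwhitneyutest(x, x.Length, y, y.Length, out bothtails, out lefttail, out righttail);

    // compare each p-value against the chosen significance level (e.g. 0.05)
    System.Console.WriteLine("{0:F4} {1:F4} {2:F4}", bothtails, lefttail, righttail);
    System.Console.ReadLine();
    return 0;
}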
cmatrixdet
cmatrixludet
rmatrixdet
rmatrixludet
spdmatrixcholeskydet
spdmatrixdet
matdet_d_1 Determinant calculation, real matrix, short form
matdet_d_2 Determinant calculation, real matrix, full form
matdet_d_3 Determinant calculation, complex matrix, short form
matdet_d_4 Determinant calculation, complex matrix, full form
matdet_d_5 Determinant calculation, complex matrix with zero imaginary part, short form
/************************************************************************* Calculation of the determinant of a general matrix Input parameters: A - matrix, array[0..N-1, 0..N-1] N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) Result: determinant of matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/
public static complex cmatrixdet(complex[,] a) public static complex cmatrixdet(complex[,] a, int n)

Examples:   [1]  [2]  [3]  

/************************************************************************* Determinant calculation of the matrix given by its LU decomposition. Input parameters: A - LU decomposition of the matrix (output of RMatrixLU subroutine). Pivots - table of permutations which were made during the LU decomposition. Output of RMatrixLU subroutine. N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) Result: matrix determinant. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/
public static complex cmatrixludet(complex[,] a, int[] pivots) public static complex cmatrixludet(complex[,] a, int[] pivots, int n)
/************************************************************************* Calculation of the determinant of a general matrix Input parameters: A - matrix, array[0..N-1, 0..N-1] N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) Result: determinant of matrix A. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/
public static double rmatrixdet(double[,] a) public static double rmatrixdet(double[,] a, int n)

Examples:   [1]  [2]  

/************************************************************************* Determinant calculation of the matrix given by its LU decomposition. Input parameters: A - LU decomposition of the matrix (output of RMatrixLU subroutine). Pivots - table of permutations which were made during the LU decomposition. Output of RMatrixLU subroutine. N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) Result: matrix determinant. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/
public static double rmatrixludet(double[,] a, int[] pivots) public static double rmatrixludet(double[,] a, int[] pivots, int n)
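
The example programs below cover rmatrixdet() and cmatrixdet() only. The sketch that follows shows rmatrixludet() applied to an LU factorization; it assumes the rmatrixlu() routine from the TRFAC subpackage with the signature rmatrixlu(ref a, m, n, out pivots), which is not documented in this section.

public static int Main(string[] args)
{
    // factorize the matrix, then compute the determinant from its LU decomposition;
    // det {{1,2},{2,1}} = -3, the same value rmatrixdet() returns directly
    double[,] b = new double[,]{{1,2},{2,1}};
    int[] pivots;
    alglib.rmatrixlu(ref b, 2, 2, out pivots); // b is overwritten by its LU factors
    double a = alglib.rmatrixludet(b, pivots);
    System.Console.WriteLine("{0:F3}", a); // EXPECTED: -3
    System.Console.ReadLine();
    return 0;
}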
/************************************************************************* Determinant calculation of the matrix given by the Cholesky decomposition. Input parameters: A - Cholesky decomposition, output of SMatrixCholesky subroutine. N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) As the determinant is equal to the product of squares of diagonal elements, it’s not necessary to specify which triangle - lower or upper - the matrix is stored in. Result: matrix determinant. -- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/
public static double spdmatrixcholeskydet(double[,] a) public static double spdmatrixcholeskydet(double[,] a, int n)
/************************************************************************* Determinant calculation of the symmetric positive definite matrix. Input parameters: A - matrix. Array with elements [0..N-1, 0..N-1]. N - (optional) size of matrix A: * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, automatically determined from matrix size (A must be square matrix) IsUpper - (optional) storage type: * if True, symmetric matrix A is given by its upper triangle, and the lower triangle isn’t used/changed by function * if False, symmetric matrix A is given by its lower triangle, and the upper triangle isn’t used/changed by function * if not given, both lower and upper triangles must be filled. Result: determinant of matrix A. If matrix A is not positive definite, exception is thrown. -- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/
public static double spdmatrixdet(double[,] a) public static double spdmatrixdet(double[,] a, int n, bool isupper)
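
No example is listed for spdmatrixdet() in this section; here is a minimal sketch of the short form (N and IsUpper omitted, so both triangles must be filled), using an SPD matrix whose determinant is easy to verify by hand.

public static int Main(string[] args)
{
    // symmetric positive definite matrix, det = 2*2 - 1*1 = 3
    double[,] b = new double[,]{{2,1},{1,2}};
    double a;
    a = alglib.spdmatrixdet(b);
    System.Console.WriteLine("{0:F3}", a); // EXPECTED: 3
    System.Console.ReadLine();
    return 0;
}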

public static int Main(string[] args)
{
    double[,] b = new double[,]{{1,2},{2,1}};
    double a;
    a = alglib.rmatrixdet(b);
    System.Console.WriteLine("{0:F3}", a); // EXPECTED: -3
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    double[,] b = new double[,]{{5,4},{4,5}};
    double a;
    a = alglib.rmatrixdet(b, 2);
    System.Console.WriteLine("{0:F3}", a); // EXPECTED: 9
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    alglib.complex[,] b = new alglib.complex[,]{{new alglib.complex(1,+1),2},{2,new alglib.complex(1,-1)}};
    alglib.complex a;
    a = alglib.cmatrixdet(b);
    System.Console.WriteLine("{0}", alglib.ap.format(a,3)); // EXPECTED: -2
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    alglib.complex a;
    alglib.complex[,] b = new alglib.complex[,]{{new alglib.complex(0,5),4},{new alglib.complex(0,4),5}};
    a = alglib.cmatrixdet(b, 2);
    System.Console.WriteLine("{0}", alglib.ap.format(a,3)); // EXPECTED: 9i
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    alglib.complex a;
    alglib.complex[,] b = new alglib.complex[,]{{9,1},{2,1}};
    a = alglib.cmatrixdet(b);
    System.Console.WriteLine("{0}", alglib.ap.format(a,3)); // EXPECTED: 7
    System.Console.ReadLine();
    return 0;
}


cmatrixrndcond
cmatrixrndorthogonal
cmatrixrndorthogonalfromtheleft
cmatrixrndorthogonalfromtheright
hmatrixrndcond
hmatrixrndmultiply
hpdmatrixrndcond
rmatrixrndcond
rmatrixrndorthogonal
rmatrixrndorthogonalfromtheleft
rmatrixrndorthogonalfromtheright
smatrixrndcond
smatrixrndmultiply
spdmatrixrndcond
/************************************************************************* Generation of random NxN complex matrix with given condition number C and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void cmatrixrndcond(int n, double c, out complex[,] a)
/************************************************************************* Generation of a random Haar distributed orthogonal complex matrix INPUT PARAMETERS: N - matrix size, N>=1 OUTPUT PARAMETERS: A - orthogonal NxN matrix, array[0..N-1,0..N-1] -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void cmatrixrndorthogonal(int n, out complex[,] a)
/************************************************************************* Multiplication of MxN complex matrix by MxM random Haar distributed complex orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - Q*A, where Q is random MxM orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void cmatrixrndorthogonalfromtheleft( ref complex[,] a, int m, int n)
/************************************************************************* Multiplication of MxN complex matrix by NxN random Haar distributed complex orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void cmatrixrndorthogonalfromtheright( ref complex[,] a, int m, int n)
/************************************************************************* Generation of random NxN Hermitian matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void hmatrixrndcond(int n, double c, out complex[,] a)
/************************************************************************* Hermitian multiplication of NxN matrix by random Haar distributed complex orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..N-1, 0..N-1] N - matrix size OUTPUT PARAMETERS: A - Q^H*A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void hmatrixrndmultiply(ref complex[,] a, int n)
/************************************************************************* Generation of random NxN Hermitian positive definite matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random HPD matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void hpdmatrixrndcond(int n, double c, out complex[,] a)
/************************************************************************* Generation of random NxN matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void rmatrixrndcond(int n, double c, out double[,] a)
/************************************************************************* Generation of a random uniformly distributed (Haar) orthogonal matrix INPUT PARAMETERS: N - matrix size, N>=1 OUTPUT PARAMETERS: A - orthogonal NxN matrix, array[0..N-1,0..N-1] -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void rmatrixrndorthogonal(int n, out double[,] a)
/************************************************************************* Multiplication of MxN matrix by MxM random Haar distributed orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - Q*A, where Q is random MxM orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void rmatrixrndorthogonalfromtheleft( ref double[,] a, int m, int n)
/************************************************************************* Multiplication of MxN matrix by NxN random Haar distributed orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..M-1, 0..N-1] M, N- matrix size OUTPUT PARAMETERS: A - A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void rmatrixrndorthogonalfromtheright( ref double[,] a, int m, int n)
/************************************************************************* Generation of random NxN symmetric matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void smatrixrndcond(int n, double c, out double[,] a)
/************************************************************************* Symmetric multiplication of NxN matrix by random Haar distributed orthogonal matrix INPUT PARAMETERS: A - matrix, array[0..N-1, 0..N-1] N - matrix size OUTPUT PARAMETERS: A - Q'*A*Q, where Q is random NxN orthogonal matrix -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void smatrixrndmultiply(ref double[,] a, int n)
/************************************************************************* Generation of random NxN symmetric positive definite matrix with given condition number and norm2(A)=1 INPUT PARAMETERS: N - matrix size C - condition number (in 2-norm) OUTPUT PARAMETERS: A - random SPD matrix with norm2(A)=1 and cond(A)=C -- ALGLIB routine -- 04.12.2009 Bochkanov Sergey *************************************************************************/
public static void spdmatrixrndcond(int n, double c, out double[,] a)
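
The MATGEN generators above are listed without examples. The sketch below shows rmatrixrndcond() in use; since the output is random, there is no fixed expected value, and the program simply prints the generated matrix.

public static int Main(string[] args)
{
    // generate a random 3x3 matrix with norm2(A)=1 and 2-norm condition number 100
    double[,] a;
    alglib.rmatrixrndcond(3, 100.0, out a);
    System.Console.WriteLine("{0}", alglib.ap.format(a,3));
    System.Console.ReadLine();
    return 0;
}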
matinvreport
cmatrixinverse
cmatrixluinverse
cmatrixtrinverse
hpdmatrixcholeskyinverse
hpdmatrixinverse
rmatrixinverse
rmatrixluinverse
rmatrixtrinverse
spdmatrixcholeskyinverse
spdmatrixinverse
matinv_d_c1 Complex matrix inverse
matinv_d_hpd1 HPD matrix inverse
matinv_d_r1 Real matrix inverse
matinv_d_spd1 SPD matrix inverse
/************************************************************************* Matrix inverse report: * R1 reciprocal of condition number in 1-norm * RInf reciprocal of condition number in inf-norm *************************************************************************/
public class matinvreport { public double r1; public double rinf; }
/************************************************************************* Inversion of a general matrix. Input parameters: A - matrix N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/
public static void cmatrixinverse( ref complex[,] a, out int info, out matinvreport rep) public static void cmatrixinverse( ref complex[,] a, int n, out int info, out matinvreport rep)

Examples:   [1]  

/************************************************************************* Inversion of a matrix given by its LU decomposition. INPUT PARAMETERS: A - LU decomposition of the matrix (output of CMatrixLU subroutine). Pivots - table of permutations (the output of CMatrixLU subroutine). N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) OUTPUT PARAMETERS: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 05.02.2010 Bochkanov Sergey *************************************************************************/
public static void cmatrixluinverse( ref complex[,] a, int[] pivots, out int info, out matinvreport rep) public static void cmatrixluinverse( ref complex[,] a, int[] pivots, int n, out int info, out matinvreport rep)
/************************************************************************* Triangular matrix inverse (complex) The subroutine inverts the following types of matrices: * upper triangular * upper triangular with unit diagonal * lower triangular * lower triangular with unit diagonal In case of an upper (lower) triangular matrix, the inverse matrix will also be upper (lower) triangular, and after the end of the algorithm, the inverse matrix replaces the source matrix. The elements below (above) the main diagonal are not changed by the algorithm. If the matrix has a unit diagonal, the inverse matrix also has a unit diagonal, and the diagonal elements are not passed to the algorithm. Input parameters: A - matrix, array[0..N-1, 0..N-1]. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) IsUpper - True, if the matrix is upper triangular. IsUnit - diagonal type (optional): * if True, matrix has unit diagonal (a[i,i] are NOT used) * if False, matrix diagonal is arbitrary * if not given, False is assumed Output parameters: Info - same as for RMatrixLUInverse Rep - same as for RMatrixLUInverse A - same as for RMatrixLUInverse. -- ALGLIB -- Copyright 05.02.2010 by Bochkanov Sergey *************************************************************************/
public static void cmatrixtrinverse( ref complex[,] a, bool isupper, out int info, out matinvreport rep) public static void cmatrixtrinverse( ref complex[,] a, int n, bool isupper, bool isunit, out int info, out matinvreport rep)
/************************************************************************* Inversion of a Hermitian positive definite matrix which is given by Cholesky decomposition. Input parameters: A - Cholesky decomposition of the matrix to be inverted: A=U’*U or A = L*L'. Output of HPDMatrixCholesky subroutine. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) IsUpper - storage type (optional): * if True, symmetric matrix A is given by its upper triangle, and the lower triangle isn’t used/changed by function * if False, symmetric matrix A is given by its lower triangle, and the upper triangle isn’t used/changed by function * if not given, lower half is used. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/
public static void hpdmatrixcholeskyinverse( ref complex[,] a, out int info, out matinvreport rep) public static void hpdmatrixcholeskyinverse( ref complex[,] a, int n, bool isupper, out int info, out matinvreport rep)
/************************************************************************* Inversion of a Hermitian positive definite matrix. Given an upper or lower triangle of a Hermitian positive definite matrix, the algorithm generates matrix A^-1 and saves the upper or lower triangle depending on the input. Input parameters: A - matrix to be inverted (upper or lower triangle). Array with elements [0..N-1,0..N-1]. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) IsUpper - storage type (optional): * if True, symmetric matrix A is given by its upper triangle, and the lower triangle isn’t used/changed by function * if False, symmetric matrix A is given by its lower triangle, and the upper triangle isn’t used/changed by function * if not given, both lower and upper triangles must be filled. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/
public static void hpdmatrixinverse( ref complex[,] a, out int info, out matinvreport rep) public static void hpdmatrixinverse( ref complex[,] a, int n, bool isupper, out int info, out matinvreport rep)

Examples:   [1]  

/************************************************************************* Inversion of a general matrix. Input parameters: A - matrix. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse Result: True, if the matrix is not singular. False, if the matrix is singular. -- ALGLIB -- Copyright 2005-2010 by Bochkanov Sergey *************************************************************************/
public static void rmatrixinverse( ref double[,] a, out int info, out matinvreport rep) public static void rmatrixinverse( ref double[,] a, int n, out int info, out matinvreport rep)

Examples:   [1]  

/************************************************************************* Inversion of a matrix given by its LU decomposition. INPUT PARAMETERS: A - LU decomposition of the matrix (output of RMatrixLU subroutine). Pivots - table of permutations (the output of RMatrixLU subroutine). N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) OUTPUT PARAMETERS: Info - return code: * -3 A is singular, or VERY close to singular. it is filled by zeros in such cases. * 1 task is solved (but matrix A may be ill-conditioned, check R1/RInf parameters for condition numbers). Rep - solver report, see below for more info A - inverse of matrix A. Array whose indexes range within [0..N-1, 0..N-1]. SOLVER REPORT Subroutine sets following fields of the Rep structure: * R1 reciprocal of condition number: 1/cond(A), 1-norm. * RInf reciprocal of condition number: 1/cond(A), inf-norm. -- ALGLIB routine -- 05.02.2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixluinverse( ref double[,] a, int[] pivots, out int info, out matinvreport rep) public static void rmatrixluinverse( ref double[,] a, int[] pivots, int n, out int info, out matinvreport rep)
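
rmatrixluinverse() has no example reference in this section. The sketch below inverts a matrix through its LU decomposition; it assumes the rmatrixlu() routine from the TRFAC subpackage with the signature rmatrixlu(ref a, m, n, out pivots). The result should match the rmatrixinverse() example later in this section.

public static int Main(string[] args)
{
    double[,] a = new double[,]{{1,-1},{1,1}};
    int[] pivots;
    int info;
    alglib.matinvreport rep;
    alglib.rmatrixlu(ref a, 2, 2, out pivots); // a now holds the LU factors
    alglib.rmatrixluinverse(ref a, pivots, out info, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    System.Console.WriteLine("{0}", alglib.ap.format(a,4)); // EXPECTED: [[0.5,0.5],[-0.5,0.5]]
    System.Console.ReadLine();
    return 0;
}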
/************************************************************************* Triangular matrix inverse (real) The subroutine inverts the following types of matrices: * upper triangular * upper triangular with unit diagonal * lower triangular * lower triangular with unit diagonal In case of an upper (lower) triangular matrix, the inverse matrix will also be upper (lower) triangular, and after the end of the algorithm, the inverse matrix replaces the source matrix. The elements below (above) the main diagonal are not changed by the algorithm. If the matrix has a unit diagonal, the inverse matrix also has a unit diagonal, and the diagonal elements are not passed to the algorithm. Input parameters: A - matrix, array[0..N-1, 0..N-1]. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) IsUpper - True, if the matrix is upper triangular. IsUnit - diagonal type (optional): * if True, matrix has unit diagonal (a[i,i] are NOT used) * if False, matrix diagonal is arbitrary * if not given, False is assumed Output parameters: Info - same as for RMatrixLUInverse Rep - same as for RMatrixLUInverse A - same as for RMatrixLUInverse. -- ALGLIB -- Copyright 05.02.2010 by Bochkanov Sergey *************************************************************************/
public static void rmatrixtrinverse( ref double[,] a, bool isupper, out int info, out matinvreport rep) public static void rmatrixtrinverse( ref double[,] a, int n, bool isupper, bool isunit, out int info, out matinvreport rep)
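
rmatrixtrinverse() also has no example reference; the sketch below uses the short form (IsUnit defaults to False) to invert an upper triangular matrix in place. Elements below the diagonal are neither referenced nor changed.

public static int Main(string[] args)
{
    // inverse of {{2,1},{0,4}} is {{0.5,-0.125},{0,0.25}}
    double[,] a = new double[,]{{2,1},{0,4}};
    int info;
    alglib.matinvreport rep;
    alglib.rmatrixtrinverse(ref a, true, out info, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    System.Console.WriteLine("{0}", alglib.ap.format(a,4)); // EXPECTED: [[0.5,-0.125],[0,0.25]]
    System.Console.ReadLine();
    return 0;
}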
/************************************************************************* Inversion of a symmetric positive definite matrix which is given by Cholesky decomposition. Input parameters: A - Cholesky decomposition of the matrix to be inverted: A=U’*U or A = L*L'. Output of SPDMatrixCholesky subroutine. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) IsUpper - storage type (optional): * if True, symmetric matrix A is given by its upper triangle, and the lower triangle isn’t used/changed by function * if False, symmetric matrix A is given by its lower triangle, and the upper triangle isn’t used/changed by function * if not given, lower half is used. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/
public static void spdmatrixcholeskyinverse( ref double[,] a, out int info, out matinvreport rep) public static void spdmatrixcholeskyinverse( ref double[,] a, int n, bool isupper, out int info, out matinvreport rep)
/************************************************************************* Inversion of a symmetric positive definite matrix. Given an upper or lower triangle of a symmetric positive definite matrix, the algorithm generates matrix A^-1 and saves the upper or lower triangle depending on the input. Input parameters: A - matrix to be inverted (upper or lower triangle). Array with elements [0..N-1,0..N-1]. N - size of matrix A (optional) : * if given, only principal NxN submatrix is processed and overwritten. other elements are unchanged. * if not given, size is automatically determined from matrix size (A must be square matrix) IsUpper - storage type (optional): * if True, symmetric matrix A is given by its upper triangle, and the lower triangle isn’t used/changed by function * if False, symmetric matrix A is given by its lower triangle, and the upper triangle isn’t used/changed by function * if not given, both lower and upper triangles must be filled. Output parameters: Info - return code, same as in RMatrixLUInverse Rep - solver report, same as in RMatrixLUInverse A - inverse of matrix A, same as in RMatrixLUInverse -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/
public static void spdmatrixinverse( ref double[,] a, out int info, out matinvreport rep) public static void spdmatrixinverse( ref double[,] a, int n, bool isupper, out int info, out matinvreport rep)

Examples:   [1]  


public static int Main(string[] args)
{
    alglib.complex[,] a = new alglib.complex[,]{{new alglib.complex(0,1),-1},{new alglib.complex(0,1),1}};
    int info;
    alglib.matinvreport rep;
    alglib.cmatrixinverse(ref a, out info, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    System.Console.WriteLine("{0}", alglib.ap.format(a,4)); // EXPECTED: [[-0.5i,-0.5i],[-0.5,0.5]]
    System.Console.WriteLine("{0:F4}", rep.r1); // EXPECTED: 0.5
    System.Console.WriteLine("{0:F4}", rep.rinf); // EXPECTED: 0.5
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    alglib.complex[,] a = new alglib.complex[,]{{2,1},{1,2}};
    int info;
    alglib.matinvreport rep;
    alglib.hpdmatrixinverse(ref a, out info, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    System.Console.WriteLine("{0}", alglib.ap.format(a,4)); // EXPECTED: [[0.666666,-0.333333],[-0.333333,0.666666]]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    double[,] a = new double[,]{{1,-1},{1,1}};
    int info;
    alglib.matinvreport rep;
    alglib.rmatrixinverse(ref a, out info, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    System.Console.WriteLine("{0}", alglib.ap.format(a,4)); // EXPECTED: [[0.5,0.5],[-0.5,0.5]]
    System.Console.WriteLine("{0:F4}", rep.r1); // EXPECTED: 0.5
    System.Console.WriteLine("{0:F4}", rep.rinf); // EXPECTED: 0.5
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    double[,] a = new double[,]{{2,1},{1,2}};
    int info;
    alglib.matinvreport rep;
    alglib.spdmatrixinverse(ref a, out info, out rep);
    System.Console.WriteLine("{0}", info); // EXPECTED: 1
    System.Console.WriteLine("{0}", alglib.ap.format(a,4)); // EXPECTED: [[0.666666,-0.333333],[-0.333333,0.666666]]
    System.Console.ReadLine();
    return 0;
}


mcpdreport
mcpdstate
mcpdaddbc
mcpdaddec
mcpdaddtrack
mcpdcreate
mcpdcreateentry
mcpdcreateentryexit
mcpdcreateexit
mcpdresults
mcpdsetbc
mcpdsetec
mcpdsetlc
mcpdsetpredictionweights
mcpdsetprior
mcpdsettikhonovregularizer
mcpdsolve
mcpd_simple1 Simple unconstrained MCPD model (no entry/exit states)
mcpd_simple2 Simple MCPD model (no entry/exit states) with equality constraints
/************************************************************************* This structure is a MCPD training report: InnerIterationsCount - number of inner iterations of the underlying optimization algorithm OuterIterationsCount - number of outer iterations of the underlying optimization algorithm NFEV - number of merit function evaluations TerminationType - termination type (same as for MinBLEIC optimizer, positive values denote success, negative ones - failure) -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public class mcpdreport { public int inneriterationscount; public int outeriterationscount; public int nfev; public int terminationtype; }
/************************************************************************* This structure is a MCPD (Markov Chains for Population Data) solver. You should use ALGLIB functions in order to work with this object. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public class mcpdstate { }
/************************************************************************* This function is used to add bound constraints on the elements of the transition matrix P. MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 Final constraints which are passed to the underlying optimizer are calculated as intersection of all present constraints. For example, you may specify boundary constraint on P[0,0] and equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to ADD bound constraint for one element of P without changing constraints for other elements. You can also use MCPDSetBC() function which allows to place bound constraints on arbitrary subset of elements of P. Set of constraints is specified by BndL/BndU matrices, which may contain arbitrary combination of finite numbers or infinities (like -INF<x<=0.5 or 0.1<=x<+INF). These functions (MCPDSetBC and MCPDAddBC) interact as follows: * there is internal matrix of bound constraints which is stored in the MCPD solver * MCPDSetBC() replaces this matrix by another one (SET) * MCPDAddBC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus MCPDAddBC() call preserves all modifications done by previous calls, while MCPDSetBC() completely discards all changes done to the equality constraints. INPUT PARAMETERS: S - solver I - row index of element being constrained J - column index of element being constrained BndL - lower bound BndU - upper bound -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdaddbc( mcpdstate s, int i, int j, double bndl, double bndu)
/************************************************************************* This function is used to add equality constraints on the elements of the transition matrix P. MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 Final constraints which are passed to the underlying optimizer are calculated as intersection of all present constraints. For example, you may specify boundary constraint on P[0,0] and equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to ADD equality constraint for one element of P without changing constraints for other elements. You can also use MCPDSetEC() function which allows you to specify arbitrary set of equality constraints in one call. These functions (MCPDSetEC and MCPDAddEC) interact as follows: * there is internal matrix of equality constraints which is stored in the MCPD solver * MCPDSetEC() replaces this matrix by another one (SET) * MCPDAddEC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus MCPDAddEC() call preserves all modifications done by previous calls, while MCPDSetEC() completely discards all changes done to the equality constraints. INPUT PARAMETERS: S - solver I - row index of element being constrained J - column index of element being constrained C - value (constraint for P[I,J]). Can be either NAN (no constraint) or finite value from [0,1]. NOTES: 1. infinite values of C will lead to exception being thrown. Values less than 0.0 or greater than 1.0 will lead to error code being returned after call to MCPDSolve(). -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdaddec(mcpdstate s, int i, int j, double c)

Examples:   [1]  

/************************************************************************* This function is used to add a track - sequence of system states at the different moments of its evolution. You may add one or several tracks to the MCPD solver. In case you have several tracks, they won't overwrite each other. For example, if you pass two tracks, A1-A2-A3 (system at t=A+1, t=A+2 and t=A+3) and B1-B2-B3, then solver will try to model transitions from t=A+1 to t=A+2, t=A+2 to t=A+3, t=B+1 to t=B+2, t=B+2 to t=B+3. But it WONT mix these two tracks - i.e. it wont try to model transition from t=A+3 to t=B+1. INPUT PARAMETERS: S - solver XY - track, array[K,N]: * I-th row is a state at t=I * elements of XY must be non-negative (exception will be thrown on negative elements) K - number of points in a track * if given, only leading K rows of XY are used * if not given, automatically determined from size of XY NOTES: 1. Track may contain either proportional or population data: * with proportional data all rows of XY must sum to 1.0, i.e. we have proportions instead of absolute population values * with population data rows of XY contain population counts and generally do not sum to 1.0 (although they still must be non-negative) -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdaddtrack(mcpdstate s, double[,] xy) public static void mcpdaddtrack(mcpdstate s, double[,] xy, int k)

Examples:   [1]  [2]  

/************************************************************************* DESCRIPTION: This function creates MCPD (Markov Chains for Population Data) solver. This solver can be used to find transition matrix P for N-dimensional prediction problem where transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional population vectors (components of each X are non-negative), and P is a N*N transition matrix (elements of P are non-negative, each column sums to 1.0). Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is constant, i.e. there are no new individuals and no one leaves population * you want to model transitions of individuals from one state into another USAGE: Here we give very brief outline of the MCPD. We strongly recommend you to read examples in the ALGLIB Reference Manual and to read ALGLIB User Guide on data analysis which is available at http://www.alglib.net/dataanalysis/ 1. User initializes algorithm state with MCPDCreate() call 2. User adds one or more tracks - sequences of states which describe evolution of a system being modelled from different starting conditions 3. User may add optional boundary, equality and/or linear constraints on the coefficients of P by calling one of the following functions: * MCPDSetEC() to set equality constraints * MCPDSetBC() to set bound constraints * MCPDSetLC() to set linear constraints 4. Optionally, user may set custom weights for prediction errors (by default, algorithm assigns non-equal, automatically chosen weights for errors in the prediction of different components of X). It can be done with a call of MCPDSetPredictionWeights() function. 5. User calls MCPDSolve() function which takes algorithm state and solves the problem 6. User calls MCPDResults() to get solution INPUT PARAMETERS: N - problem dimension, N>=1 OUTPUT PARAMETERS: State - structure stores algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdcreate(int n, out mcpdstate s)

Examples:   [1]  [2]  
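
The referenced example programs (mcpd_simple1, mcpd_simple2) appear elsewhere in the manual. The sketch below reconstructs the basic workflow from the usage outline above - create the solver, add a track, solve, read the results. The track values are illustrative proportions, not data from the manual, and mcpdsolve() is assumed to take only the solver state.

public static int Main(string[] args)
{
    // two-state system observed at four moments; each row is a state vector,
    // given here as proportions which sum to 1.0
    double[,] track = new double[,]{{1.00,0.00},{0.95,0.05},{0.90,0.10},{0.86,0.14}};
    alglib.mcpdstate s;
    alglib.mcpdreport rep;
    double[,] p;

    alglib.mcpdcreate(2, out s);           // 2-dimensional problem
    alglib.mcpdaddtrack(s, track);         // add observed evolution of the system
    alglib.mcpdsolve(s);                   // fit transition matrix P
    alglib.mcpdresults(s, out p, out rep); // rep.terminationtype>0 denotes success

    System.Console.WriteLine("{0}", rep.terminationtype);
    System.Console.WriteLine("{0}", alglib.ap.format(p,2));
    System.Console.ReadLine();
    return 0;
}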

/************************************************************************* DESCRIPTION: This function is a specialized version of MCPDCreate() function, and we recommend you to read comments for this function for general information about MCPD solver. This function creates MCPD (Markov Chains for Population Data) solver for "Entry-state" model, i.e. model where transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional state vectors P is a N*N transition matrix and one selected component of X[] is called "entry" state and is treated in a special way: system state always transits from "entry" state to some another state system state can not transit from any state into "entry" state Such conditions basically mean that row of P which corresponds to "entry" state is zero. Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is NOT constant - at every moment of time there is some (unpredictable) amount of "new" individuals, which can transit into one of the states at the next turn, but still no one leaves population * you want to model transitions of individuals from one state into another * but you do NOT want to predict amount of "new" individuals because it does not depends on individuals already present (hence system can not transit INTO entry state - it can only transit FROM it). This model is discussed in more details in the ALGLIB User Guide (see http://www.alglib.net/dataanalysis/ for more data). INPUT PARAMETERS: N - problem dimension, N>=2 EntryState- index of entry state, in 0..N-1 OUTPUT PARAMETERS: State - structure stores algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdcreateentry(int n, int entrystate, out mcpdstate s)
/************************************************************************* DESCRIPTION: This function is a specialized version of MCPDCreate() function, and we recommend you to read comments for this function for general information about MCPD solver. This function creates MCPD (Markov Chains for Population Data) solver for "Entry-Exit-states" model, i.e. model where transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional state vectors P is a N*N transition matrix one selected component of X[] is called "entry" state and is treated in a special way: system state always transits from "entry" state to some another state system state can not transit from any state into "entry" state and another one component of X[] is called "exit" state and is treated in a special way too: system state can transit from any state into "exit" state system state can not transit from "exit" state into any other state transition operator discards "exit" state (makes it zero at each turn) Such conditions basically mean that: row of P which corresponds to "entry" state is zero column of P which corresponds to "exit" state is zero Multiplication by such P may decrease sum of vector components. Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is NOT constant * at every moment of time there is some (unpredictable) amount of "new" individuals, which can transit into one of the states at the next turn * some individuals can move (predictably) into "exit" state and leave population at the next turn * you want to model transitions of individuals from one state into another, including transitions from the "entry" state and into the "exit" state. * but you do NOT want to predict amount of "new" individuals because it does not depends on individuals already present (hence system can not transit INTO entry state - it can only transit FROM it). This model is discussed in more details in the ALGLIB User Guide (see http://www.alglib.net/dataanalysis/ for more data). INPUT PARAMETERS: N - problem dimension, N>=2 EntryState- index of entry state, in 0..N-1 ExitState- index of exit state, in 0..N-1 OUTPUT PARAMETERS: State - structure stores algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdcreateentryexit( int n, int entrystate, int exitstate, out mcpdstate s)
/************************************************************************* DESCRIPTION: This function is a specialized version of MCPDCreate() function, and we recommend you to read comments for this function for general information about MCPD solver. This function creates MCPD (Markov Chains for Population Data) solver for "Exit-state" model, i.e. model where transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional state vectors P is a N*N transition matrix and one selected component of X[] is called "exit" state and is treated in a special way: system state can transit from any state into "exit" state system state can not transit from "exit" state into any other state transition operator discards "exit" state (makes it zero at each turn) Such conditions basically mean that column of P which corresponds to "exit" state is zero. Multiplication by such P may decrease sum of vector components. Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is NOT constant - individuals can move into "exit" state and leave population at the next turn, but there are no new individuals * amount of individuals which leave population can be predicted * you want to model transitions of individuals from one state into another (including transitions into the "exit" state) This model is discussed in more details in the ALGLIB User Guide (see http://www.alglib.net/dataanalysis/ for more data). INPUT PARAMETERS: N - problem dimension, N>=2 ExitState- index of exit state, in 0..N-1 OUTPUT PARAMETERS: State - structure stores algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdcreateexit(int n, int exitstate, out mcpdstate s)
/************************************************************************* MCPD results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: P - array[N,N], transition matrix Rep - optimization report. You should check Rep.TerminationType in order to distinguish successful termination from unsuccessful one. Speaking short, positive values denote success, negative ones are failures. More information about fields of this structure can be found in the comments on MCPDReport datatype. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdresults( mcpdstate s, out double[,] p, out mcpdreport rep)

Examples:   [1]  [2]  

/************************************************************************* This function is used to add bound constraints on the elements of the transition matrix P. MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 Final constraints which are passed to the underlying optimizer are calculated as intersection of all present constraints. For example, you may specify boundary constraint on P[0,0] and equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to place bound constraints on arbitrary subset of elements of P. Set of constraints is specified by BndL/BndU matrices, which may contain arbitrary combination of finite numbers or infinities (like -INF<x<=0.5 or 0.1<=x<+INF). You can also use MCPDAddBC() function which allows to ADD bound constraint for one element of P without changing constraints for other elements. These functions (MCPDSetBC and MCPDAddBC) interact as follows: * there is internal matrix of bound constraints which is stored in the MCPD solver * MCPDSetBC() replaces this matrix by another one (SET) * MCPDAddBC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus MCPDAddBC() call preserves all modifications done by previous calls, while MCPDSetBC() completely discards all changes done to the equality constraints. INPUT PARAMETERS: S - solver BndL - lower bounds constraints, array[N,N]. Elements of BndL can be finite numbers or -INF. BndU - upper bounds constraints, array[N,N]. Elements of BndU can be finite numbers or +INF. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdsetbc(mcpdstate s, double[,] bndl, double[,] bndu)
/************************************************************************* This function is used to add equality constraints on the elements of the transition matrix P. MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 Final constraints which are passed to the underlying optimizer are calculated as intersection of all present constraints. For example, you may specify boundary constraint on P[0,0] and equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to place equality constraints on arbitrary subset of elements of P. Set of constraints is specified by EC, which may contain either NAN's or finite numbers from [0,1]. NAN denotes absence of constraint, finite number denotes equality constraint on specific element of P. You can also use MCPDAddEC() function which allows to ADD equality constraint for one element of P without changing constraints for other elements. These functions (MCPDSetEC and MCPDAddEC) interact as follows: * there is internal matrix of equality constraints which is stored in the MCPD solver * MCPDSetEC() replaces this matrix by another one (SET) * MCPDAddEC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus MCPDAddEC() call preserves all modifications done by previous calls, while MCPDSetEC() completely discards all changes done to the equality constraints. INPUT PARAMETERS: S - solver EC - equality constraints, array[N,N]. Elements of EC can be either NAN's or finite numbers from [0,1]. NAN denotes absence of constraints, while finite value denotes equality constraint on the corresponding element of P. NOTES: 1. infinite values of EC will lead to exception being thrown. Values less than 0.0 or greater than 1.0 will lead to error code being returned after call to MCPDSolve(). -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdsetec(mcpdstate s, double[,] ec)
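For instance, a minimal sketch (again assuming a hypothetical 2-state solver s): NaN marks "no constraint", a finite value fixes the corresponding element of P:

double nan = System.Double.NaN;

// fix P[0,1]=0.5, leave all other elements unconstrained
double[,] ec = new double[,]{{nan,0.5},{nan,nan}};
alglib.mcpdsetec(s, ec);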
/************************************************************************* This function is used to set linear equality/inequality constraints on the elements of the transition matrix P. This function can be used to set one or several general linear constraints on the elements of P. Two types of constraints are supported: * equality constraints * inequality constraints (both less-or-equal and greater-or-equal) Coefficients of constraints are specified by matrix C (one of the parameters). One row of C corresponds to one constraint. Because transition matrix P has N*N elements, we need N*N columns to store all coefficients (they are stored row by row), and one more column to store right part - hence C has N*N+1 columns. Constraint kind is stored in the CT array. Thus, I-th linear constraint is P[0,0]*C[I,0] + P[0,1]*C[I,1] + .. + P[0,N-1]*C[I,N-1] + + P[1,0]*C[I,N] + P[1,1]*C[I,N+1] + ... + + P[N-1,N-1]*C[I,N*N-1] ?=? C[I,N*N] where ?=? can be either "=" (CT[i]=0), "<=" (CT[i]<0) or ">=" (CT[i]>0). Your constraint may involve only some subset of P (less than N*N elements). For example it can be something like P[0,0] + P[0,1] = 0.5 In this case you still should pass matrix with N*N+1 columns, but all its elements (except for C[0,0], C[0,1] and C[0,N*N]) will be zero. INPUT PARAMETERS: S - solver C - array[K,N*N+1] - coefficients of constraints (see above for complete description) CT - array[K] - constraint types (see above for complete description) K - number of equality/inequality constraints, K>=0: * if given, only leading K elements of C/CT are used * if not given, automatically determined from sizes of C/CT -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdsetlc(mcpdstate s, double[,] c, int[] ct) public static void mcpdsetlc(mcpdstate s, double[,] c, int[] ct, int k)
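As a hedged sketch for a 2-state solver s (so C must have N*N+1=5 columns), the single inequality constraint P[0,0]+P[0,1]<=0.9 could be passed as follows; coefficients are stored row by row and the right part occupies the last column:

// 1*P[0,0] + 1*P[0,1] + 0*P[1,0] + 0*P[1,1] <= 0.9
double[,] c = new double[,]{{1,1,0,0,0.9}};
int[] ct = new int[]{-1};   // CT[i]<0 means "<="
alglib.mcpdsetlc(s, c, ct);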
/************************************************************************* This function is used to change prediction weights MCPD solver scales prediction errors as follows Error(P) = ||W*(y-P*x)||^2 where x is a system state at time t y is a system state at time t+1 P is a transition matrix W is a diagonal scaling matrix By default, weights are chosen in order to minimize relative prediction error instead of absolute one. For example, if one component of state is about 0.5 in magnitude and another one is about 0.05, then algorithm will make corresponding weights equal to 2.0 and 20.0. INPUT PARAMETERS: S - solver PW - array[N], weights: * must be non-negative values (exception will be thrown otherwise) * zero values will be replaced by automatically chosen values -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdsetpredictionweights(mcpdstate s, double[] pw)
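A minimal sketch, assuming a 2-state solver s: a zero entry keeps the automatically chosen weight, a positive entry overrides it:

// weight the second component of the prediction error explicitly,
// let the solver pick the weight for the first component
double[] pw = new double[]{0.0, 10.0};
alglib.mcpdsetpredictionweights(s, pw);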
/************************************************************************* This function allows to set prior values used for regularization of your problem. By default, regularizing term is equal to r*||P-prior_P||^2, where r is a small non-zero value, P is transition matrix, prior_P is identity matrix, ||X||^2 is a sum of squared elements of X. This function allows you to change prior values prior_P. You can also change r with MCPDSetTikhonovRegularizer() function. INPUT PARAMETERS: S - solver PP - array[N,N], matrix of prior values: 1. elements must be real numbers from [0,1] 2. columns must sum to 1.0. First property is checked (exception is thrown otherwise), while second one is not checked/enforced. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdsetprior(mcpdstate s, double[,] pp)
/************************************************************************* This function allows to tune amount of Tikhonov regularization being applied to your problem. By default, regularizing term is equal to r*||P-prior_P||^2, where r is a small non-zero value, P is transition matrix, prior_P is identity matrix, ||X||^2 is a sum of squared elements of X. This function allows you to change coefficient r. You can also change prior values with MCPDSetPrior() function. INPUT PARAMETERS: S - solver V - regularization coefficient, finite non-negative value. It is not recommended to specify zero value unless you are pretty sure that you want it. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdsettikhonovregularizer(mcpdstate s, double v)
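The following hedged sketch (2-state solver s) combines MCPDSetPrior() and MCPDSetTikhonovRegularizer(): the solution is pulled towards a non-identity prior, and the strength of that pull is increased above the default:

// prior guess for P; note that each column sums to 1.0
double[,] pp = new double[,]{{0.9,0.3},{0.1,0.7}};
alglib.mcpdsetprior(s, pp);

// increase the weight of the r*||P-prior_P||^2 term
alglib.mcpdsettikhonovregularizer(s, 0.01);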
/************************************************************************* This function is used to start solution of the MCPD problem. After return from this function, you can use MCPDResults() to get solution and completion code. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
public static void mcpdsolve(mcpdstate s)

Examples:   [1]  [2]  


public static int Main(string[] args)
{
    //
    // The very simple MCPD example
    //
    // We have a loan portfolio. Our loans can be in one of two states:
    // * normal loans ("good" ones)
    // * past due loans ("bad" ones)
    //
    // We assume that:
    // * loans can transition from any state to any other state. In 
    //   particular, past due loan can become "good" one at any moment 
    //   with same (fixed) probability. Not realistic, but it is toy example :)
    // * portfolio size does not change over time
    //
    // Thus, we have following model
    //     state_new = P*state_old
    // where
    //         ( p00  p01 )
    //     P = (          )
    //         ( p10  p11 )
    //
    // We want to model transitions between these two states using MCPD
    // approach (Markov Chains for Proportional/Population Data), i.e.
    // to restore hidden transition matrix P using actual portfolio data.
    // We have:
    // * proportional data, i.e. proportion of loans in the normal and past 
    //   due states (not portfolio size measured in some currency, although 
    //   it is possible to work with population data too)
    // * two tracks, i.e. two sequences which describe portfolio
    //   evolution from two different starting states: [1,0] (all loans 
    //   are "good") and [0.8,0.2] (only 80% of portfolio is in the "good"
    //   state)
    //
    alglib.mcpdstate s;
    alglib.mcpdreport rep;
    double[,] p;
    double[,] track0 = new double[,]{{1.00000,0.00000},{0.95000,0.05000},{0.92750,0.07250},{0.91738,0.08263},{0.91282,0.08718}};
    double[,] track1 = new double[,]{{0.80000,0.20000},{0.86000,0.14000},{0.88700,0.11300},{0.89915,0.10085}};

    alglib.mcpdcreate(2, out s);
    alglib.mcpdaddtrack(s, track0);
    alglib.mcpdaddtrack(s, track1);
    alglib.mcpdsolve(s);
    alglib.mcpdresults(s, out p, out rep);

    //
    // Hidden matrix P is equal to
    //         ( 0.95  0.50 )
    //         (            )
    //         ( 0.05  0.50 )
    // which means that "good" loans can become "bad" with 5% probability, 
    // while "bad" loans will return to good state with 50% probability.
    //
    System.Console.WriteLine("{0}", alglib.ap.format(p,2)); // EXPECTED: [[0.95,0.50],[0.05,0.50]]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // Simple MCPD example
    //
    // We have a loan portfolio. Our loans can be in one of three states:
    // * normal loans
    // * past due loans
    // * charged off loans
    //
    // We assume that:
    // * normal loan can stay normal or become past due (but not charged off)
    // * past due loan can stay past due, become normal or charged off
    // * charged off loan will stay charged off for the rest of eternity
    // * portfolio size does not change over time
    // Not realistic, but it is toy example :)
    //
    // Thus, we have following model
    //     state_new = P*state_old
    // where
    //         ( p00  p01    )
    //     P = ( p10  p11    )
    //         (      p21  1 )
    // i.e. four elements of P are known a priori.
    //
    // Although it is possible (given enough data) to recover these elements
    // from the data alone, in order to enforce this property we set equality
    // constraints on them.
    //
    // We want to model transitions between these three states using MCPD
    // approach (Markov Chains for Proportional/Population Data), i.e.
    // to restore hidden transition matrix P using actual portfolio data.
    // We have:
    // * proportional data, i.e. proportion of loans in each of the three
    //   states (not portfolio size measured in some currency, although 
    //   it is possible to work with population data too)
    // * two tracks, i.e. two sequences which describe portfolio
    //   evolution from two different starting states: [1,0,0] (all loans 
    //   are "good") and [0.8,0.2,0.0] (only 80% of portfolio is in the "good"
    //   state)
    //
    alglib.mcpdstate s;
    alglib.mcpdreport rep;
    double[,] p;
    double[,] track0 = new double[,]{{1.000000,0.000000,0.000000},{0.950000,0.050000,0.000000},{0.927500,0.060000,0.012500},{0.911125,0.061375,0.027500},{0.896256,0.060900,0.042844}};
    double[,] track1 = new double[,]{{0.800000,0.200000,0.000000},{0.860000,0.090000,0.050000},{0.862000,0.065500,0.072500},{0.851650,0.059475,0.088875},{0.838805,0.057451,0.103744}};

    alglib.mcpdcreate(3, out s);
    alglib.mcpdaddtrack(s, track0);
    alglib.mcpdaddtrack(s, track1);
    alglib.mcpdaddec(s, 0, 2, 0.0);
    alglib.mcpdaddec(s, 1, 2, 0.0);
    alglib.mcpdaddec(s, 2, 2, 1.0);
    alglib.mcpdaddec(s, 2, 0, 0.0);
    alglib.mcpdsolve(s);
    alglib.mcpdresults(s, out p, out rep);

    //
    // Hidden matrix P is equal to
    //         ( 0.95 0.50      )
    //         ( 0.05 0.25      )
    //         (      0.25 1.00 ) 
    // which means that "good" loans can become past due with 5% probability, 
    // while past due loans will become charged off with 25% probability or
    // return back to normal state with 50% probability.
    //
    System.Console.WriteLine("{0}", alglib.ap.format(p,2)); // EXPECTED: [[0.95,0.50,0.00],[0.05,0.25,0.00],[0.00,0.25,1.00]]
    System.Console.ReadLine();
    return 0;
}


minbleicreport
minbleicstate
minbleiccreate
minbleiccreatef
minbleicoptimize
minbleicrestartfrom
minbleicresults
minbleicresultsbuf
minbleicsetbc
minbleicsetinnercond
minbleicsetlc
minbleicsetmaxits
minbleicsetoutercond
minbleicsetprecdefault
minbleicsetprecdiag
minbleicsetprecscale
minbleicsetscale
minbleicsetstpmax
minbleicsetxrep
minbleic_d_1 Nonlinear optimization with bound constraints
minbleic_d_2 Nonlinear optimization with linear inequality constraints
minbleic_ftrim Nonlinear optimization by BLEIC, function with singularities
minbleic_numdiff Nonlinear optimization with bound constraints and numerical differentiation
/************************************************************************* This structure stores optimization report: * InnerIterationsCount number of inner iterations * OuterIterationsCount number of outer iterations * NFEV number of gradient evaluations * TerminationType termination type (see below) TERMINATION CODES TerminationType field contains completion code, which can be: -10 unsupported combination of algorithm settings: 1) StpMax is set to non-zero value, AND 2) non-default preconditioner is used. You can't use both features at the same moment, so you have to choose one of them (and to turn off another one). -3 inconsistent constraints. Feasible point is either nonexistent or too hard to find. Try to restart optimizer with better initial approximation 4 conditions on constraints are fulfilled with error less than or equal to EpsC 5 MaxIts steps was taken 7 stopping conditions are too stringent, further improvement is impossible, X contains best point found so far. ADDITIONAL FIELDS There are additional fields which can be used for debugging: * DebugEqErr error in the equality constraints (2-norm) * DebugFS f, calculated at projection of initial point to the feasible set * DebugFF f, calculated at the final point * DebugDX |X_start-X_final| *************************************************************************/
public class minbleicreport { public int inneriterationscount; public int outeriterationscount; public int nfev; public int terminationtype; public double debugeqerr; public double debugfs; public double debugff; public double debugdx; }
/************************************************************************* This object stores nonlinear optimizer state. You should use functions provided by MinBLEIC subpackage to work with this object *************************************************************************/
public class minbleicstate { }
/************************************************************************* BOUND CONSTRAINED OPTIMIZATION WITH ADDITIONAL LINEAR EQUALITY AND INEQUALITY CONSTRAINTS DESCRIPTION: The subroutine minimizes function F(x) of N arguments subject to any combination of: * bound constraints * linear inequality constraints * linear equality constraints REQUIREMENTS: * user must provide function value and gradient * starting point X0 must be feasible or not too far away from the feasible set * grad(f) must be Lipschitz continuous on a level set: L = { x : f(x)<=f(x0) } * function must be defined everywhere on the feasible set F USAGE: Constrained optimization is far more complex than the unconstrained one. Here we give very brief outline of the BLEIC optimizer. We strongly recommend you to read examples in the ALGLIB Reference Manual and to read ALGLIB User Guide on optimization, which is available at http://www.alglib.net/optimization/ 1. User initializes algorithm state with MinBLEICCreate() call 2. User adds boundary and/or linear constraints by calling MinBLEICSetBC() and MinBLEICSetLC() functions. 3. User sets stopping conditions for underlying unconstrained solver with MinBLEICSetInnerCond() call. This function controls accuracy of underlying optimization algorithm. 4. User sets stopping conditions for outer iteration by calling MinBLEICSetOuterCond() function. This function controls handling of boundary and inequality constraints. 5. Additionally, user may set limit on number of internal iterations by MinBLEICSetMaxIts() call. This function allows to prevent algorithm from looping forever. 6. User calls MinBLEICOptimize() function which takes algorithm state and pointer (delegate, etc.) to callback function which calculates F/G. 7. User calls MinBLEICResults() to get solution 8. Optionally user may call MinBLEICRestartFrom() to solve another problem with same N but another starting point. MinBLEICRestartFrom() allows to reuse already initialized structure. INPUT PARAMETERS: N - problem dimension, N>0: * if given, only leading N elements of X are used * if not given, automatically determined from size of X X - starting point, array[N]: * it is better to set X to a feasible point * but X can be infeasible, in which case algorithm will try to find feasible point first, using X as initial approximation. OUTPUT PARAMETERS: State - structure stores algorithm state -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleiccreate(double[] x, out minbleicstate state) public static void minbleiccreate( int n, double[] x, out minbleicstate state)

Examples:   [1]  [2]  [3]  

/************************************************************************* The subroutine is finite difference variant of MinBLEICCreate(). It uses finite differences in order to differentiate target function. Description below contains information which is specific to this function only. We recommend to read comments on MinBLEICCreate() in order to get more information about creation of BLEIC optimizer. INPUT PARAMETERS: N - problem dimension, N>0: * if given, only leading N elements of X are used * if not given, automatically determined from size of X X - starting point, array[0..N-1]. DiffStep- differentiation step, >0 OUTPUT PARAMETERS: State - structure which stores algorithm state NOTES: 1. algorithm uses 4-point central formula for differentiation. 2. differentiation step along I-th axis is equal to DiffStep*S[I] where S[] is scaling vector which can be set by MinBLEICSetScale() call. 3. we recommend you to use moderate values of differentiation step. Too large step will result in too large truncation errors, while too small step will result in too large numerical errors. 1.0E-6 can be good value to start with. 4. Numerical differentiation is very inefficient - one gradient calculation needs 4*N function evaluations. This function will work for any N - either small (1...10), moderate (10...100) or large (100...). However, performance penalty will be too severe for any N's except for small ones. We should also say that code which relies on numerical differentiation is less robust and precise. CG needs exact gradient values. Imprecise gradient may slow down convergence, especially on highly nonlinear problems. Thus we recommend to use this function for fast prototyping on small- dimensional problems only, and to implement analytical gradient as soon as possible. -- ALGLIB -- Copyright 16.05.2011 by Bochkanov Sergey *************************************************************************/
public static void minbleiccreatef( double[] x, double diffstep, out minbleicstate state) public static void minbleiccreatef( int n, double[] x, double diffstep, out minbleicstate state)

Examples:   [1]  

/************************************************************************* This family of functions is used to launch iterations of nonlinear optimizer. These functions accept following parameters: state - algorithm state func - callback which calculates function (or merit function) value func at given point x grad - callback which calculates function (or merit function) value func and gradient grad at given point x rep - optional callback which is called after each iteration can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep can be NULL NOTES: 1. This function has two different implementations: one which uses exact (analytical) user-supplied gradient, and one which uses function value only and numerically differentiates function in order to obtain gradient. Depending on the specific function used to create optimizer object (either MinBLEICCreate() for analytical gradient or MinBLEICCreateF() for numerical differentiation) you should choose appropriate variant of MinBLEICOptimize() - one which accepts function AND gradient or one which accepts function ONLY. Be careful to choose variant of MinBLEICOptimize() which corresponds to your optimization scheme! Table below lists different combinations of callback (function/gradient) passed to MinBLEICOptimize() and specific function used to create optimizer. | USER PASSED TO MinBLEICOptimize() CREATED WITH | function only | function and gradient ------------------------------------------------------------ MinBLEICCreateF() | work FAIL MinBLEICCreate() | FAIL work Here "FAIL" denotes inappropriate combinations of optimizer creation function and MinBLEICOptimize() version. Attempts to use such combination (for example, to create optimizer with MinBLEICCreateF() and to pass gradient information to MinBLEICOptimize()) will lead to exception being thrown. Either you did not pass gradient when it WAS needed or you passed gradient when it was NOT needed. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicoptimize(minbleicstate state, ndimensional_func func, ndimensional_rep rep, object obj) public static void minbleicoptimize(minbleicstate state, ndimensional_grad grad, ndimensional_rep rep, object obj)

Examples:   [1]  [2]  [3]  [4]  

/************************************************************************* This subroutine restarts algorithm from new point. All optimization parameters (including constraints) are left unchanged. This function allows to solve multiple optimization problems (which must have same number of dimensions) without object reallocation penalty. INPUT PARAMETERS: State - structure previously allocated with MinBLEICCreate call. X - new starting point. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicrestartfrom(minbleicstate state, double[] x)
/************************************************************************* BLEIC results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report. You should check Rep.TerminationType in order to distinguish successful termination from unsuccessful one. More information about fields of this structure can be found in the comments on MinBLEICReport datatype. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicresults( minbleicstate state, out double[] x, out minbleicreport rep)

Examples:   [1]  [2]  [3]  [4]  

/************************************************************************* BLEIC results Buffered implementation of MinBLEICResults() which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicresultsbuf( minbleicstate state, ref double[] x, minbleicreport rep)
/************************************************************************* This function sets boundary constraints for BLEIC optimizer. Boundary constraints are inactive by default (after initial creation). They are preserved after algorithm restart with MinBLEICRestartFrom(). INPUT PARAMETERS: State - structure stores algorithm state BndL - lower bounds, array[N]. If some (all) variables are unbounded, you may specify very small number or -INF. BndU - upper bounds, array[N]. If some (all) variables are unbounded, you may specify very large number or +INF. NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case I-th variable will be "frozen" at X[i]=BndL[i]=BndU[i]. NOTE 2: this solver has following useful properties: * bound constraints are always satisfied exactly * function is evaluated only INSIDE area specified by bound constraints, even when numerical differentiation is used (algorithm adjusts nodes according to boundary constraints) -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetbc( minbleicstate state, double[] bndl, double[] bndu)

Examples:   [1]  [2]  

/************************************************************************* This function sets stopping conditions for the underlying nonlinear CG optimizer. It controls overall accuracy of solution. These conditions should be strict enough in order for algorithm to converge. INPUT PARAMETERS: State - structure which stores algorithm state EpsG - >=0 The subroutine finishes its work if the condition |v|<EpsG is satisfied, where: * |.| means Euclidian norm * v - scaled gradient vector, v[i]=g[i]*s[i] * g - gradient * s - scaling coefficients set by MinBLEICSetScale() EpsF - >=0 The subroutine finishes its work if on k+1-th iteration the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} is satisfied. EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |v|<=EpsX is fulfilled, where: * |.| means Euclidian norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by MinBLEICSetScale() Passing EpsG=0, EpsF=0 and EpsX=0 (simultaneously) will lead to automatic stopping criterion selection. These conditions are used to terminate inner iterations. However, you need to tune termination conditions for outer iterations too. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetinnercond( minbleicstate state, double epsg, double epsf, double epsx)

Examples:   [1]  [2]  [3]  

/************************************************************************* This function sets linear constraints for BLEIC optimizer. Linear constraints are inactive by default (after initial creation). They are preserved after algorithm restart with MinBLEICRestartFrom(). INPUT PARAMETERS: State - structure previously allocated with MinBLEICCreate call. C - linear constraints, array[K,N+1]. Each row of C represents one constraint, either equality or inequality (see below): * first N elements correspond to coefficients, * last element corresponds to the right part. All elements of C (including right part) must be finite. CT - type of constraints, array[K]: * if CT[i]>0, then I-th constraint is C[i,*]*x >= C[i,n+1] * if CT[i]=0, then I-th constraint is C[i,*]*x = C[i,n+1] * if CT[i]<0, then I-th constraint is C[i,*]*x <= C[i,n+1] K - number of equality/inequality constraints, K>=0: * if given, only leading K elements of C/CT are used * if not given, automatically determined from sizes of C/CT NOTE 1: linear (non-bound) constraints are satisfied only approximately: * there always exists some minor violation (about Epsilon in magnitude) due to rounding errors * numerical differentiation, if used, may lead to function evaluations outside of the feasible area, because algorithm does NOT change numerical differentiation formula according to linear constraints. If you want constraints to be satisfied exactly, try to reformulate your problem in such manner that all constraints will become boundary ones (this kind of constraints is always satisfied exactly, both in the final solution and in all intermediate points). -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetlc( minbleicstate state, double[,] c, int[] ct) public static void minbleicsetlc( minbleicstate state, double[,] c, int[] ct, int k)

Examples:   [1]  

/************************************************************************* This function allows to stop algorithm after specified number of inner iterations. INPUT PARAMETERS: State - structure which stores algorithm state MaxIts - maximum number of inner iterations. If MaxIts=0, the number of iterations is unlimited. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetmaxits(minbleicstate state, int maxits)
/************************************************************************* This function sets stopping conditions for outer iteration of BLEIC algo. These conditions control accuracy of constraint handling and amount of infeasibility allowed in the solution. INPUT PARAMETERS: State - structure which stores algorithm state EpsX - >0, stopping condition on outer iteration step length EpsI - >0, stopping condition on infeasibility Both EpsX and EpsI must be non-zero. MEANING OF EpsX EpsX is a stopping condition for outer iterations. Algorithm will stop when solution of the current modified subproblem will be within EpsX (using 2-norm) of the previous solution. MEANING OF EpsI EpsI controls feasibility properties - algorithm won't stop until all inequality constraints will be satisfied with error (distance from current point to the feasible area) at most EpsI. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetoutercond( minbleicstate state, double epsx, double epsi)

Examples:   [1]  [2]  [3]  

/************************************************************************* Modification of the preconditioner: preconditioning is turned off. INPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetprecdefault(minbleicstate state)
/************************************************************************* Modification of the preconditioner: diagonal of approximate Hessian is used. INPUT PARAMETERS: State - structure which stores algorithm state D - diagonal of the approximate Hessian, array[0..N-1], (if larger, only leading N elements are used). NOTE 1: D[i] should be positive. Exception will be thrown otherwise. NOTE 2: you should pass diagonal of approximate Hessian - NOT ITS INVERSE. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetprecdiag(minbleicstate state, double[] d)
/************************************************************************* Modification of the preconditioner: scale-based diagonal preconditioning. This preconditioning mode can be useful when you don't have approximate diagonal of Hessian, but you know that your variables are badly scaled (for example, one variable is in [1,10], and another in [1000,100000]), and most part of the ill-conditioning comes from different scales of vars. In this case simple scale-based preconditioner, with H[i] = 1/(s[i]^2), can greatly improve convergence. IMPORTANT: you should set scale of your variables with MinBLEICSetScale() call (before or after MinBLEICSetPrecScale() call). Without knowledge of the scale of your variables scale-based preconditioner will be just unit matrix. INPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetprecscale(minbleicstate state)
/************************************************************************* This function sets scaling coefficients for BLEIC optimizer. ALGLIB optimizers use scaling matrices to test stopping conditions (step size and gradient are scaled before comparison with tolerances). Scale of the I-th variable is a translation invariant measure of: a) "how large" the variable is b) how large the step should be to make significant changes in the function Scaling is also used by finite difference variant of the optimizer - step along I-th axis is equal to DiffStep*S[I]. In most optimizers (and in the BLEIC too) scaling is NOT a form of preconditioning. It just affects stopping conditions. You should set preconditioner by separate call to one of the MinBLEICSetPrec...() functions. There is a special preconditioning mode, however, which uses scaling coefficients to form diagonal preconditioning matrix. You can turn this mode on, if you want. But you should understand that scaling is not the same thing as preconditioning - these are two different, although related forms of tuning solver. INPUT PARAMETERS: State - structure stores algorithm state S - array[N], non-zero scaling coefficients S[i] may be negative, sign doesn't matter. -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetscale(minbleicstate state, double[] s)
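A short sketch under the assumption of a two-variable problem where the first variable is of order 1 and the second of order 10^4 (state is an already created optimizer):

double[] s = new double[]{1.0, 1.0e4};   // typical magnitudes of the variables
alglib.minbleicsetscale(state, s);       // affects stopping conditions and finite difference steps
alglib.minbleicsetprecscale(state);      // optionally reuse the same scales as a diagonal preconditioner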
/************************************************************************* This function sets maximum step length IMPORTANT: this feature is hard to combine with preconditioning. You can't set upper limit on step length, when you solve optimization problem with linear (non-boundary) constraints AND preconditioner turned on. When non-boundary constraints are present, you have to either a) use preconditioner, or b) use upper limit on step length. YOU CAN'T USE BOTH! In this case algorithm will terminate with appropriate error code. INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which lead to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the x+stp*d. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetstpmax(minbleicstate state, double stpmax)
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, algorithm will call rep() callback function if it is provided to MinBLEICOptimize(). -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetxrep(minbleicstate state, bool needxrep)
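A hedged sketch of how reporting might be wired up; the callback below follows the ndimensional_rep delegate expected by MinBLEICOptimize() (current point, function value, user object), and the parameter names are illustrative:

public static void print_progress(double[] x, double func, object obj)
{
    // called once per iteration when XRep is turned on
    System.Console.WriteLine("f = {0} at x = {1}", func, alglib.ap.format(x, 4));
}

// ... after creating the optimizer and setting constraints/conditions:
alglib.minbleicsetxrep(state, true);
alglib.minbleicoptimize(state, function1_grad, print_progress, null);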
public static void function1_grad(double[] x, ref double func, double[] grad, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
    grad[0] = 400*System.Math.Pow(x[0]+3,3);
    grad[1] = 4*System.Math.Pow(x[1]-3,3);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x,y) = 100*(x+3)^4+(y-3)^4
    // subject to bound constraints -1<=x<=+1, -1<=y<=+1, using BLEIC optimizer.
    //
    double[] x = new double[]{0,0};
    double[] bndl = new double[]{-1,-1};
    double[] bndu = new double[]{+1,+1};
    alglib.minbleicstate state;
    alglib.minbleicreport rep;

    //
    // These variables define stopping conditions for the underlying CG algorithm.
    // They should be stringent enough in order to guarantee overall stability
    // of the outer iterations.
    //
    // We use very simple condition - |g|<=epsg
    //
    double epsg = 0.000001;
    double epsf = 0;
    double epsx = 0;

    //
    // These variables define stopping conditions for the outer iterations:
    // * epso controls convergence of outer iterations; algorithm will stop
    //   when difference between solutions of subsequent unconstrained problems
    //   will be less than epso
    // * epsi controls amount of infeasibility allowed in the final solution
    //
    double epso = 0.00001;
    double epsi = 0.00001;

    //
    // Now we are ready to actually optimize something:
    // * first we create optimizer
    // * we add boundary constraints
    // * we tune stopping conditions
    // * and, finally, optimize and obtain results...
    //
    alglib.minbleiccreate(x, out state);
    alglib.minbleicsetbc(state, bndl, bndu);
    alglib.minbleicsetinnercond(state, epsg, epsf, epsx);
    alglib.minbleicsetoutercond(state, epso, epsi);
    alglib.minbleicoptimize(state, function1_grad, null, null);
    alglib.minbleicresults(state, out x, out rep);

    //
    // ...and evaluate these results
    //
    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-1,1]
    System.Console.ReadLine();
    return 0;
}


public static void function1_grad(double[] x, ref double func, double[] grad, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
    grad[0] = 400*System.Math.Pow(x[0]+3,3);
    grad[1] = 4*System.Math.Pow(x[1]-3,3);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x,y) = 100*(x+3)^4+(y-3)^4
    // subject to inequality constraints:
    // * x>=2 (posed as general linear constraint),
    // * x+y>=6
    // using BLEIC optimizer.
    //
    double[] x = new double[]{5,5};
    double[,] c = new double[,]{{1,0,2},{1,1,6}};
    int[] ct = new int[]{1,1};
    alglib.minbleicstate state;
    alglib.minbleicreport rep;

    //
    // These variables define stopping conditions for the underlying CG algorithm.
    // They should be stringent enough in order to guarantee overall stability
    // of the outer iterations.
    //
    // We use very simple condition - |g|<=epsg
    //
    double epsg = 0.000001;
    double epsf = 0;
    double epsx = 0;

    //
    // These variables define stopping conditions for the outer iterations:
    // * epso controls convergence of outer iterations; algorithm will stop
    //   when difference between solutions of subsequent unconstrained problems
    //   will be less than epso
    // * epsi controls amount of infeasibility allowed in the final solution
    //
    double epso = 0.00001;
    double epsi = 0.00001;

    //
    // Now we are ready to actually optimize something:
    // * first we create optimizer
    // * we add linear constraints
    // * we tune stopping conditions
    // * and, finally, optimize and obtain results...
    //
    alglib.minbleiccreate(x, out state);
    alglib.minbleicsetlc(state, c, ct);
    alglib.minbleicsetinnercond(state, epsg, epsf, epsx);
    alglib.minbleicsetoutercond(state, epso, epsi);
    alglib.minbleicoptimize(state, function1_grad, null, null);
    alglib.minbleicresults(state, out x, out rep);

    //
    // ...and evaluate these results
    //
    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [2,4]
    System.Console.ReadLine();
    return 0;
}


public static void s1_grad(double[] x, ref double func, double[] grad, object obj)
{
    //
    // this callback calculates f(x) = (1+x)^(-0.2) + (1-x)^(-0.3) + 1000*x and its gradient.
    //
    // function is trimmed when we calculate it near the singular points or outside of the [-1,+1].
    // Note that we do NOT calculate gradient in this case.
    //
    if( (x[0]<=-0.999999999999) || (x[0]>=+0.999999999999) )
    {
        func = 1.0E+300;
        return;
    }
    func = System.Math.Pow(1+x[0],-0.2) + System.Math.Pow(1-x[0],-0.3) + 1000*x[0];
    grad[0] = -0.2*System.Math.Pow(1+x[0],-1.2) +0.3*System.Math.Pow(1-x[0],-1.3) + 1000;
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x) = (1+x)^(-0.2) + (1-x)^(-0.3) + 1000*x.
    //
    // This function is undefined outside of (-1,+1) and has singularities at x=-1 and x=+1.
    // Special technique called "function trimming" allows us to solve this optimization problem 
    // - without using boundary constraints!
    //
    // See http://www.alglib.net/optimization/tipsandtricks.php#ftrimming for more information
    // on this subject.
    //
    double[] x = new double[]{0};
    double epsg = 1.0e-6;
    double epsf = 0;
    double epsx = 0;
    double epso = 1.0e-6;
    double epsi = 1.0e-6;
    alglib.minbleicstate state;
    alglib.minbleicreport rep;

    alglib.minbleiccreate(x, out state);
    alglib.minbleicsetinnercond(state, epsg, epsf, epsx);
    alglib.minbleicsetoutercond(state, epso, epsi);
    alglib.minbleicoptimize(state, s1_grad, null, null);
    alglib.minbleicresults(state, out x, out rep);

    System.Console.WriteLine("{0}", alglib.ap.format(x,5)); // EXPECTED: [-0.99917305]
    System.Console.ReadLine();
    return 0;
}


public static void function1_func(double[] x, ref double func, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x,y) = 100*(x+3)^4+(y-3)^4
    // subject to bound constraints -1<=x<=+1, -1<=y<=+1, using BLEIC optimizer.
    //
    double[] x = new double[]{0,0};
    double[] bndl = new double[]{-1,-1};
    double[] bndu = new double[]{+1,+1};
    alglib.minbleicstate state;
    alglib.minbleicreport rep;

    //
    // These variables define stopping conditions for the underlying CG algorithm.
    // They should be stringent enough in order to guarantee overall stability
    // of the outer iterations.
    //
    // We use very simple condition - |g|<=epsg
    //
    double epsg = 0.000001;
    double epsf = 0;
    double epsx = 0;

    //
    // These variables define stopping conditions for the outer iterations:
    // * epso controls convergence of outer iterations; algorithm will stop
    //   when difference between solutions of subsequent unconstrained problems
    //   will be less than epso
    // * epsi controls amount of infeasibility allowed in the final solution
    //
    double epso = 0.00001;
    double epsi = 0.00001;

    //
    // This variable contains differentiation step
    //
    double diffstep = 1.0e-6;

    //
    // Now we are ready to actually optimize something:
    // * first we create optimizer
    // * we add boundary constraints
    // * we tune stopping conditions
    // * and, finally, optimize and obtain results...
    //
    alglib.minbleiccreatef(x, diffstep, out state);
    alglib.minbleicsetbc(state, bndl, bndu);
    alglib.minbleicsetinnercond(state, epsg, epsf, epsx);
    alglib.minbleicsetoutercond(state, epso, epsi);
    alglib.minbleicoptimize(state, function1_func, null, null);
    alglib.minbleicresults(state, out x, out rep);

    //
    // ...and evaluate these results
    //
    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-1,1]
    System.Console.ReadLine();
    return 0;
}


mincgreport
mincgstate
mincgcreate
mincgcreatef
mincgoptimize
mincgrestartfrom
mincgresults
mincgresultsbuf
mincgsetcgtype
mincgsetcond
mincgsetprecdefault
mincgsetprecdiag
mincgsetprecscale
mincgsetscale
mincgsetstpmax
mincgsetxrep
mincgsuggeststep
mincg_d_1 Nonlinear optimization by CG
mincg_d_2 Nonlinear optimization with additional settings and restarts
mincg_ftrim Nonlinear optimization by CG, function with singularities
mincg_numdiff Nonlinear optimization by CG with numerical differentiation
/************************************************************************* This structure stores optimization report: * IterationsCount number of iterations * NFEV number of function/gradient evaluations * TerminationType termination type (see the comments on MinCGResults() for the list of completion codes) *************************************************************************/
public class mincgreport { public int iterationscount; public int nfev; public int terminationtype; }
/************************************************************************* This object stores state of the nonlinear CG optimizer. You should use ALGLIB functions to work with this object. *************************************************************************/
public class mincgstate { }
/************************************************************************* NONLINEAR CONJUGATE GRADIENT METHOD DESCRIPTION: The subroutine minimizes function F(x) of N arguments by using one of the nonlinear conjugate gradient methods. These CG methods are globally convergent (even on non-convex functions) as long as grad(f) is Lipschitz continuous in some neighborhood of the level set L = { x : f(x)<=f(x0) }. REQUIREMENTS: Algorithm will request following information during its operation: * function value F and its gradient G (simultaneously) at given point X USAGE: 1. User initializes algorithm state with MinCGCreate() call 2. User tunes solver parameters with MinCGSetCond(), MinCGSetStpMax() and other functions 3. User calls MinCGOptimize() function which takes algorithm state and pointer (delegate, etc.) to callback function which calculates F/G. 4. User calls MinCGResults() to get solution 5. Optionally, user may call MinCGRestartFrom() to solve another problem with same N but another starting point and/or another function. MinCGRestartFrom() allows to reuse already initialized structure. INPUT PARAMETERS: N - problem dimension, N>0: * if given, only leading N elements of X are used * if not given, automatically determined from size of X X - starting point, array[0..N-1]. OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 25.03.2010 by Bochkanov Sergey *************************************************************************/
public static void mincgcreate(double[] x, out mincgstate state) public static void mincgcreate(int n, double[] x, out mincgstate state)

Examples:   [1]  [2]  [3]  

/************************************************************************* The subroutine is finite difference variant of MinCGCreate(). It uses finite differences in order to differentiate target function. Description below contains information which is specific to this function only. We recommend to read comments on MinCGCreate() in order to get more information about creation of CG optimizer. INPUT PARAMETERS: N - problem dimension, N>0: * if given, only leading N elements of X are used * if not given, automatically determined from size of X X - starting point, array[0..N-1]. DiffStep- differentiation step, >0 OUTPUT PARAMETERS: State - structure which stores algorithm state NOTES: 1. algorithm uses 4-point central formula for differentiation. 2. differentiation step along I-th axis is equal to DiffStep*S[I] where S[] is scaling vector which can be set by MinCGSetScale() call. 3. we recommend you to use moderate values of differentiation step. Too large step will result in too large truncation errors, while too small step will result in too large numerical errors. 1.0E-6 can be good value to start with. 4. Numerical differentiation is very inefficient - one gradient calculation needs 4*N function evaluations. This function will work for any N - either small (1...10), moderate (10...100) or large (100...). However, performance penalty will be too severe for any N's except for small ones. We should also say that code which relies on numerical differentiation is less robust and precise. CG needs exact gradient values. Imprecise gradient may slow down convergence, especially on highly nonlinear problems. Thus we recommend to use this function for fast prototyping on small- dimensional problems only, and to implement analytical gradient as soon as possible. -- ALGLIB -- Copyright 16.05.2011 by Bochkanov Sergey *************************************************************************/
public static void mincgcreatef( double[] x, double diffstep, out mincgstate state) public static void mincgcreatef( int n, double[] x, double diffstep, out mincgstate state)

Examples:   [1]  

/************************************************************************* This family of functions is used to launch iterations of nonlinear optimizer. These functions accept following parameters: state - algorithm state func - callback which calculates function (or merit function) value func at given point x grad - callback which calculates function (or merit function) value func and gradient grad at given point x rep - optional callback which is called after each iteration can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep can be NULL NOTES: 1. This function has two different implementations: one which uses exact (analytical) user-supplied gradient, and one which uses function value only and numerically differentiates function in order to obtain gradient. Depending on the specific function used to create optimizer object (either MinCGCreate() for analytical gradient or MinCGCreateF() for numerical differentiation) you should choose appropriate variant of MinCGOptimize() - one which accepts function AND gradient or one which accepts function ONLY. Be careful to choose variant of MinCGOptimize() which corresponds to your optimization scheme! Table below lists different combinations of callback (function/gradient) passed to MinCGOptimize() and specific function used to create optimizer. | USER PASSED TO MinCGOptimize() CREATED WITH | function only | function and gradient ------------------------------------------------------------ MinCGCreateF() | work FAIL MinCGCreate() | FAIL work Here "FAIL" denotes inappropriate combinations of optimizer creation function and MinCGOptimize() version. Attempts to use such combination (for example, to create optimizer with MinCGCreateF() and to pass gradient information to MinCGOptimize()) will lead to exception being thrown. Either you did not pass gradient when it WAS needed or you passed gradient when it was NOT needed. -- ALGLIB -- Copyright 20.04.2009 by Bochkanov Sergey *************************************************************************/
public static void mincgoptimize(mincgstate state, ndimensional_func func, ndimensional_rep rep, object obj) public static void mincgoptimize(mincgstate state, ndimensional_grad grad, ndimensional_rep rep, object obj)

Examples:   [1]  [2]  [3]  [4]  

/************************************************************************* This subroutine restarts CG algorithm from new point. All optimization parameters are left unchanged. This function allows to solve multiple optimization problems (which must have same number of dimensions) without object reallocation penalty. INPUT PARAMETERS: State - structure used to store algorithm state. X - new starting point. -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
public static void mincgrestartfrom(mincgstate state, double[] x)

Examples:   [1]  

/************************************************************************* Conjugate gradient results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report: * Rep.TerminationType completion code: * 1 relative function improvement is no more than EpsF. * 2 relative step is no more than EpsX. * 4 gradient norm is no more than EpsG * 5 MaxIts steps was taken * 7 stopping conditions are too stringent, further improvement is impossible, we return best X found so far * 8 terminated by user * Rep.IterationsCount contains iterations count * NFEV contains number of function calculations -- ALGLIB -- Copyright 20.04.2009 by Bochkanov Sergey *************************************************************************/
public static void mincgresults( mincgstate state, out double[] x, out mincgreport rep)

Examples:   [1]  [2]  [3]  [4]  

/************************************************************************* Conjugate gradient results Buffered implementation of MinCGResults(), which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 20.04.2009 by Bochkanov Sergey *************************************************************************/
public static void mincgresultsbuf( mincgstate state, ref double[] x, mincgreport rep)
/************************************************************************* This function sets CG algorithm. INPUT PARAMETERS: State - structure which stores algorithm state CGType - algorithm type: * -1 automatic selection of the best algorithm * 0 DY (Dai and Yuan) algorithm * 1 Hybrid DY-HS algorithm -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void mincgsetcgtype(mincgstate state, int cgtype)
/************************************************************************* This function sets stopping conditions for CG optimization algorithm. INPUT PARAMETERS: State - structure which stores algorithm state EpsG - >=0 The subroutine finishes its work if the condition |v|<EpsG is satisfied, where: * |.| means Euclidian norm * v - scaled gradient vector, v[i]=g[i]*s[i] * g - gradient * s - scaling coefficients set by MinCGSetScale() EpsF - >=0 The subroutine finishes its work if on k+1-th iteration the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} is satisfied. EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |v|<=EpsX is fulfilled, where: * |.| means Euclidian norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by MinCGSetScale() MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (small EpsX). -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void mincgsetcond( mincgstate state, double epsg, double epsf, double epsx, int maxits)

Examples:   [1]  [2]  [3]  

/************************************************************************* Modification of the preconditioner: preconditioning is turned off. INPUT PARAMETERS: State - structure which stores algorithm state NOTE: you can change preconditioner "on the fly", during algorithm iterations. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void mincgsetprecdefault(mincgstate state)
/************************************************************************* Modification of the preconditioner: diagonal of approximate Hessian is used. INPUT PARAMETERS: State - structure which stores algorithm state D - diagonal of the approximate Hessian, array[0..N-1], (if larger, only leading N elements are used). NOTE: you can change preconditioner "on the fly", during algorithm iterations. NOTE 2: D[i] should be positive. Exception will be thrown otherwise. NOTE 3: you should pass diagonal of approximate Hessian - NOT ITS INVERSE. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void mincgsetprecdiag(mincgstate state, double[] d)
/************************************************************************* Modification of the preconditioner: scale-based diagonal preconditioning. This preconditioning mode can be useful when you don't have approximate diagonal of Hessian, but you know that your variables are badly scaled (for example, one variable is in [1,10], and another in [1000,100000]), and most part of the ill-conditioning comes from different scales of vars. In this case simple scale-based preconditioner, with H[i] = 1/(s[i]^2), can greatly improve convergence. IMPORTANT: you should set scale of your variables with MinCGSetScale() call (before or after MinCGSetPrecScale() call). Without knowledge of the scale of your variables scale-based preconditioner will be just unit matrix. INPUT PARAMETERS: State - structure which stores algorithm state NOTE: you can change preconditioner "on the fly", during algorithm iterations. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void mincgsetprecscale(mincgstate state)
/************************************************************************* This function sets scaling coefficients for CG optimizer. ALGLIB optimizers use scaling matrices to test stopping conditions (step size and gradient are scaled before comparison with tolerances). Scale of the I-th variable is a translation invariant measure of: a) "how large" the variable is b) how large the step should be to make significant changes in the function Scaling is also used by finite difference variant of CG optimizer - step along I-th axis is equal to DiffStep*S[I]. In most optimizers (and in the CG too) scaling is NOT a form of preconditioning. It just affects stopping conditions. You should set preconditioner by separate call to one of the MinCGSetPrec...() functions. There is special preconditioning mode, however, which uses scaling coefficients to form diagonal preconditioning matrix. You can turn this mode on, if you want. But you should understand that scaling is not the same thing as preconditioning - these are two different, although related forms of tuning solver. INPUT PARAMETERS: State - structure stores algorithm state S - array[N], non-zero scaling coefficients S[i] may be negative, sign doesn't matter. -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
public static void mincgsetscale(mincgstate state, double[] s)
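A minimal sketch of scale setup follows; the scale values are assumptions chosen to illustrate variables of very different magnitude, and the optional MinCGSetPrecScale() call shows how the same coefficients can also drive the scale-based preconditioner described above.

// minimal sketch: x[0] ~ 1, x[1] ~ 10000 (illustrative magnitudes)
double[] s = new double[]{1.0, 10000.0};
alglib.mincgstate state;
alglib.mincgcreate(new double[]{0,0}, out state);
alglib.mincgsetscale(state, s);      // scales stopping-condition tests (and finite-difference steps)
alglib.mincgsetprecscale(state);     // optional: also use S[] to build a diagonal preconditioner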
/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the x+stp*d. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void mincgsetstpmax(mincgstate state, double stpmax)
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, algorithm will call rep() callback function if it is provided to MinCGOptimize(). -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void mincgsetxrep(mincgstate state, bool needxrep)
/************************************************************************* This function allows you to suggest an initial step length to the CG algorithm. Suggested step length is used as starting point for the line search. It can be useful when you have badly scaled problem, i.e. when ||grad|| (which is used as initial estimate for the first step) is many orders of magnitude different from the desired step. Line search may fail on such problems without good estimate of initial step length. Imagine, for example, problem with ||grad||=10^50 and desired step equal to 0.1. Line search function will use 10^50 as initial step, then it will decrease step length by 2 (up to 20 attempts) and will get 10^44, which is still too large. This function allows us to tell that the line search should be started from some moderate step length, like 1.0, so algorithm will be able to detect desired step length in a few searches. Default behavior (when no step is suggested) is to use preconditioner, if it is available, to generate initial estimate of step length. This function influences only first iteration of algorithm. It should be called between MinCGCreate/MinCGRestartFrom() call and MinCGOptimize() call. Suggested step is ignored if you have preconditioner. INPUT PARAMETERS: State - structure used to store algorithm state. Stp - initial estimate of the step length. Can be zero (no estimate). -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
public static void mincgsuggeststep(mincgstate state, double stp)
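Before the full examples, here is a hedged sketch of suggesting an initial step; the value 1.0 is an assumption standing in for "a moderate step" and has no special meaning.

// minimal sketch: hint a moderate first step when ||grad|| badly over-estimates it
alglib.mincgstate state;
alglib.mincgcreate(new double[]{0,0}, out state);
alglib.mincgsuggeststep(state, 1.0); // affects only the first iteration; ignored when a preconditioner is set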
public static void function1_grad(double[] x, ref double func, double[] grad, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
    grad[0] = 400*System.Math.Pow(x[0]+3,3);
    grad[1] = 4*System.Math.Pow(x[1]-3,3);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x,y) = 100*(x+3)^4+(y-3)^4
    // with nonlinear conjugate gradient method.
    //
    double[] x = new double[]{0,0};
    double epsg = 0.0000000001;
    double epsf = 0;
    double epsx = 0;
    int maxits = 0;
    alglib.mincgstate state;
    alglib.mincgreport rep;

    alglib.mincgcreate(x, out state);
    alglib.mincgsetcond(state, epsg, epsf, epsx, maxits);
    alglib.mincgoptimize(state, function1_grad, null, null);
    alglib.mincgresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,3]
    System.Console.ReadLine();
    return 0;
}


public static void function1_grad(double[] x, ref double func, double[] grad, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
    grad[0] = 400*System.Math.Pow(x[0]+3,3);
    grad[1] = 4*System.Math.Pow(x[1]-3,3);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x,y) = 100*(x+3)^4+(y-3)^4
    // with nonlinear conjugate gradient method.
    //
    // Several advanced techniques are demonstrated:
    // * upper limit on step size
    // * restart from new point
    //
    double[] x = new double[]{0,0};
    double epsg = 0.0000000001;
    double epsf = 0;
    double epsx = 0;
    double stpmax = 0.1;
    int maxits = 0;
    alglib.mincgstate state;
    alglib.mincgreport rep;

    // first run
    alglib.mincgcreate(x, out state);
    alglib.mincgsetcond(state, epsg, epsf, epsx, maxits);
    alglib.mincgsetstpmax(state, stpmax);
    alglib.mincgoptimize(state, function1_grad, null, null);
    alglib.mincgresults(state, out x, out rep);

    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,3]

    // second run - algorithm is restarted with mincgrestartfrom()
    x = new double[]{10,10};
    alglib.mincgrestartfrom(state, x);
    alglib.mincgoptimize(state, function1_grad, null, null);
    alglib.mincgresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,3]
    System.Console.ReadLine();
    return 0;
}


public static void s1_grad(double[] x, ref double func, double[] grad, object obj)
{
    //
    // this callback calculates f(x) = (1+x)^(-0.2) + (1-x)^(-0.3) + 1000*x and its gradient.
    //
    // function is trimmed when we calculate it near the singular points or outside of the [-1,+1].
    // Note that we do NOT calculate gradient in this case.
    //
    if( (x[0]<=-0.999999999999) || (x[0]>=+0.999999999999) )
    {
        func = 1.0E+300;
        return;
    }
    func = System.Math.Pow(1+x[0],-0.2) + System.Math.Pow(1-x[0],-0.3) + 1000*x[0];
    grad[0] = -0.2*System.Math.Pow(1+x[0],-1.2) +0.3*System.Math.Pow(1-x[0],-1.3) + 1000;
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x) = (1+x)^(-0.2) + (1-x)^(-0.3) + 1000*x.
    // This function has singularities at the boundary of the [-1,+1], but technique called
    // "function trimming" allows us to solve this optimization problem.
    //
    // See http://www.alglib.net/optimization/tipsandtricks.php#ftrimming for more information
    // on this subject.
    //
    double[] x = new double[]{0};
    double epsg = 1.0e-6;
    double epsf = 0;
    double epsx = 0;
    int maxits = 0;
    alglib.mincgstate state;
    alglib.mincgreport rep;

    alglib.mincgcreate(x, out state);
    alglib.mincgsetcond(state, epsg, epsf, epsx, maxits);
    alglib.mincgoptimize(state, s1_grad, null, null);
    alglib.mincgresults(state, out x, out rep);

    System.Console.WriteLine("{0}", alglib.ap.format(x,5)); // EXPECTED: [-0.99917305]
    System.Console.ReadLine();
    return 0;
}


public static void function1_func(double[] x, ref double func, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x,y) = 100*(x+3)^4+(y-3)^4
    // using numerical differentiation to calculate gradient.
    //
    double[] x = new double[]{0,0};
    double epsg = 0.0000000001;
    double epsf = 0;
    double epsx = 0;
    double diffstep = 1.0e-6;
    int maxits = 0;
    alglib.mincgstate state;
    alglib.mincgreport rep;

    alglib.mincgcreatef(x, diffstep, out state);
    alglib.mincgsetcond(state, epsg, epsf, epsx, maxits);
    alglib.mincgoptimize(state, function1_func, null, null);
    alglib.mincgresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,3]
    System.Console.ReadLine();
    return 0;
}


minasareport
minasastate
minasacreate
minasaoptimize
minasarestartfrom
minasaresults
minasaresultsbuf
minasasetalgorithm
minasasetcond
minasasetstpmax
minasasetxrep
minbleicsetbarrierdecay
minbleicsetbarrierwidth
minlbfgssetcholeskypreconditioner
minlbfgssetdefaultpreconditioner
/************************************************************************* *************************************************************************/
public class minasareport { public int iterationscount; public int nfev; public int terminationtype; public int activeconstraints; }
/************************************************************************* *************************************************************************/
public class minasastate { }
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 25.03.2010 by Bochkanov Sergey *************************************************************************/
public static void minasacreate(double[] x, double[] bndl, double[] bndu, out minasastate state)
public static void minasacreate(int n, double[] x, double[] bndl, double[] bndu, out minasastate state)
/************************************************************************* This family of functions is used to launch iterations of the nonlinear optimizer. These functions accept the following parameters: state - algorithm state grad - callback which calculates function (or merit function) value func and gradient grad at given point x rep - optional callback which is called after each iteration can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep can be NULL -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minasaoptimize(minasastate state, ndimensional_grad grad, ndimensional_rep rep, object obj)
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
public static void minasarestartfrom( minasastate state, double[] x, double[] bndl, double[] bndu)
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minasaresults( minasastate state, out double[] x, out minasareport rep)
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minasaresultsbuf( minasastate state, ref double[] x, minasareport rep)
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minasasetalgorithm(minasastate state, int algotype)
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minasasetcond( minasastate state, double epsg, double epsf, double epsx, int maxits)
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minasasetstpmax(minasastate state, double stpmax)
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minasasetxrep(minasastate state, bool needxrep)
/************************************************************************* This is an obsolete function which was used by a previous version of the BLEIC optimizer. It does nothing in the current version of BLEIC. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetbarrierdecay( minbleicstate state, double mudecay)
/************************************************************************* This is an obsolete function which was used by a previous version of the BLEIC optimizer. It does nothing in the current version of BLEIC. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
public static void minbleicsetbarrierwidth(minbleicstate state, double mu)
/************************************************************************* Obsolete function, use MinLBFGSSetCholeskyPreconditioner() instead. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgssetcholeskypreconditioner( minlbfgsstate state, double[,] p, bool isupper)
/************************************************************************* Obsolete function, use MinLBFGSSetPrecDefault() instead. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgssetdefaultpreconditioner(minlbfgsstate state)
minlbfgsreport
minlbfgsstate
minlbfgscreate
minlbfgscreatef
minlbfgsoptimize
minlbfgsrestartfrom
minlbfgsresults
minlbfgsresultsbuf
minlbfgssetcond
minlbfgssetpreccholesky
minlbfgssetprecdefault
minlbfgssetprecdiag
minlbfgssetprecscale
minlbfgssetscale
minlbfgssetstpmax
minlbfgssetxrep
minlbfgs_d_1 Nonlinear optimization by L-BFGS
minlbfgs_d_2 Nonlinear optimization with additional settings and restarts
minlbfgs_ftrim Nonlinear optimization by LBFGS, function with singularities
minlbfgs_numdiff Nonlinear optimization by L-BFGS with numerical differentiation
/************************************************************************* *************************************************************************/
public class minlbfgsreport { public int iterationscount; public int nfev; public int terminationtype; }
/************************************************************************* *************************************************************************/
public class minlbfgsstate { }
/************************************************************************* LIMITED MEMORY BFGS METHOD FOR LARGE SCALE OPTIMIZATION DESCRIPTION: The subroutine minimizes function F(x) of N arguments by using a quasi- Newton method (LBFGS scheme) which is optimized to use a minimum amount of memory. The subroutine generates the approximation of an inverse Hessian matrix by using information about the last M steps of the algorithm (instead of N). It lessens a required amount of memory from a value of order N^2 to a value of order 2*N*M. REQUIREMENTS: Algorithm will request following information during its operation: * function value F and its gradient G (simultaneously) at given point X USAGE: 1. User initializes algorithm state with MinLBFGSCreate() call 2. User tunes solver parameters with MinLBFGSSetCond() MinLBFGSSetStpMax() and other functions 3. User calls MinLBFGSOptimize() function which takes algorithm state and pointer (delegate, etc.) to callback function which calculates F/G. 4. User calls MinLBFGSResults() to get solution 5. Optionally user may call MinLBFGSRestartFrom() to solve another problem with same N/M but another starting point and/or another function. MinLBFGSRestartFrom() allows to reuse already initialized structure. INPUT PARAMETERS: N - problem dimension. N>0 M - number of corrections in the BFGS scheme of Hessian approximation update. Recommended value: 3<=M<=7. The smaller value causes worse convergence, the bigger will not cause a considerably better convergence, but will cause a fall in the performance. M<=N. X - initial solution approximation, array[0..N-1]. OUTPUT PARAMETERS: State - structure which stores algorithm state NOTES: 1. you may tune stopping conditions with MinLBFGSSetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow, use MinLBFGSSetStpMax() function to bound algorithm's steps. However, L-BFGS rarely needs such a tuning. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgscreate(int m, double[] x, out minlbfgsstate state)
public static void minlbfgscreate(int n, int m, double[] x, out minlbfgsstate state)

Examples:   [1]  [2]  [3]  

/************************************************************************* The subroutine is finite difference variant of MinLBFGSCreate(). It uses finite differences in order to differentiate target function. Description below contains information which is specific to this function only. We recommend to read comments on MinLBFGSCreate() in order to get more information about creation of LBFGS optimizer. INPUT PARAMETERS: N - problem dimension, N>0: * if given, only leading N elements of X are used * if not given, automatically determined from size of X M - number of corrections in the BFGS scheme of Hessian approximation update. Recommended value: 3<=M<=7. The smaller value causes worse convergence, the bigger will not cause a considerably better convergence, but will cause a fall in the performance. M<=N. X - starting point, array[0..N-1]. DiffStep- differentiation step, >0 OUTPUT PARAMETERS: State - structure which stores algorithm state NOTES: 1. algorithm uses 4-point central formula for differentiation. 2. differentiation step along I-th axis is equal to DiffStep*S[I] where S[] is scaling vector which can be set by MinLBFGSSetScale() call. 3. we recommend you to use moderate values of differentiation step. Too large step will result in too large truncation errors, while too small step will result in too large numerical errors. 1.0E-6 can be good value to start with. 4. Numerical differentiation is very inefficient - one gradient calculation needs 4*N function evaluations. This function will work for any N - either small (1...10), moderate (10...100) or large (100...). However, performance penalty will be too severe for any N's except for small ones. We should also say that code which relies on numerical differentiation is less robust and precise. LBFGS needs exact gradient values. Imprecise gradient may slow down convergence, especially on highly nonlinear problems. Thus we recommend to use this function for fast prototyping on small- dimensional problems only, and to implement analytical gradient as soon as possible. -- ALGLIB -- Copyright 16.05.2011 by Bochkanov Sergey *************************************************************************/
public static void minlbfgscreatef(int m, double[] x, double diffstep, out minlbfgsstate state)
public static void minlbfgscreatef(int n, int m, double[] x, double diffstep, out minlbfgsstate state)

Examples:   [1]  

/************************************************************************* This family of functions is used to launch iterations of the nonlinear optimizer. These functions accept the following parameters: state - algorithm state func - callback which calculates function (or merit function) value func at given point x grad - callback which calculates function (or merit function) value func and gradient grad at given point x rep - optional callback which is called after each iteration can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep can be NULL NOTES: 1. This function has two different implementations: one which uses exact (analytical) user-supplied gradient, and one which uses function value only and numerically differentiates function in order to obtain gradient. Depending on the specific function used to create optimizer object (either MinLBFGSCreate() for analytical gradient or MinLBFGSCreateF() for numerical differentiation) you should choose appropriate variant of MinLBFGSOptimize() - one which accepts function AND gradient or one which accepts function ONLY. Be careful to choose variant of MinLBFGSOptimize() which corresponds to your optimization scheme! Table below lists different combinations of callback (function/gradient) passed to MinLBFGSOptimize() and specific function used to create optimizer. | USER PASSED TO MinLBFGSOptimize() CREATED WITH | function only | function and gradient ------------------------------------------------------------ MinLBFGSCreateF() | work FAIL MinLBFGSCreate() | FAIL work Here "FAIL" denotes inappropriate combinations of optimizer creation function and MinLBFGSOptimize() version. Attempts to use such combination (for example, to create optimizer with MinLBFGSCreateF() and to pass gradient information to MinLBFGSOptimize()) will lead to exception being thrown. Either you did not pass gradient when it WAS needed or you passed gradient when it was NOT needed. -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minlbfgsoptimize(minlbfgsstate state, ndimensional_func func, ndimensional_rep rep, object obj)
public static void minlbfgsoptimize(minlbfgsstate state, ndimensional_grad grad, ndimensional_rep rep, object obj)

Examples:   [1]  [2]  [3]  [4]  

/************************************************************************* This subroutine restarts LBFGS algorithm from new point. All optimization parameters are left unchanged. This function allows to solve multiple optimization problems (which must have same number of dimensions) without object reallocation penalty. INPUT PARAMETERS: State - structure used to store algorithm state X - new starting point. -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgsrestartfrom(minlbfgsstate state, double[] x)

Examples:   [1]  

/************************************************************************* L-BFGS algorithm results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report: * Rep.TerminationType completion code: * -2 rounding errors prevent further improvement. X contains best point found. * -1 incorrect parameters were specified * 1 relative function improvement is no more than EpsF. * 2 relative step is no more than EpsX. * 4 gradient norm is no more than EpsG * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible * Rep.IterationsCount contains iterations count * NFEV contains number of function calculations -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgsresults( minlbfgsstate state, out double[] x, out minlbfgsreport rep)

Examples:   [1]  [2]  [3]  [4]  

/************************************************************************* L-BFGS algorithm results Buffered implementation of MinLBFGSResults which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 20.08.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgsresultsbuf( minlbfgsstate state, ref double[] x, minlbfgsreport rep)
/************************************************************************* This function sets stopping conditions for L-BFGS optimization algorithm. INPUT PARAMETERS: State - structure which stores algorithm state EpsG - >=0 The subroutine finishes its work if the condition |v|<EpsG is satisfied, where: * |.| means Euclidean norm * v - scaled gradient vector, v[i]=g[i]*s[i] * g - gradient * s - scaling coefficients set by MinLBFGSSetScale() EpsF - >=0 The subroutine finishes its work if on k+1-th iteration the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} is satisfied. EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |v|<=EpsX is fulfilled, where: * |.| means Euclidean norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by MinLBFGSSetScale() MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (small EpsX). -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgssetcond( minlbfgsstate state, double epsg, double epsf, double epsx, int maxits)

Examples:   [1]  [2]  [3]  

/************************************************************************* Modification of the preconditioner: Cholesky factorization of approximate Hessian is used. INPUT PARAMETERS: State - structure which stores algorithm state P - triangular preconditioner, Cholesky factorization of the approximate Hessian. array[0..N-1,0..N-1], (if larger, only leading N elements are used). IsUpper - whether upper or lower triangle of P is given (other triangle is not referenced) After call to this function preconditioner is changed to P (P is copied into the internal buffer). NOTE: you can change preconditioner "on the fly", during algorithm iterations. NOTE 2: P should be nonsingular. Exception will be thrown otherwise. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgssetpreccholesky( minlbfgsstate state, double[,] p, bool isupper)
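The sketch below supplies an upper-triangular Cholesky factor of an assumed 2x2 Hessian; both the matrix and the problem size are illustrative assumptions.

// minimal sketch: Cholesky factor of the (assumed) Hessian diag(400, 4) is diag(20, 2)
double[,] p = new double[,]{{20.0, 0.0},{0.0, 2.0}};
alglib.minlbfgsstate state;
alglib.minlbfgscreate(1, new double[]{0,0}, out state);
alglib.minlbfgssetpreccholesky(state, p, true);   // true: only the upper triangle of P is referenced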
/************************************************************************* Modification of the preconditioner: default preconditioner (simple scaling, same for all elements of X) is used. INPUT PARAMETERS: State - structure which stores algorithm state NOTE: you can change preconditioner "on the fly", during algorithm iterations. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgssetprecdefault(minlbfgsstate state)
/************************************************************************* Modification of the preconditioner: diagonal of approximate Hessian is used. INPUT PARAMETERS: State - structure which stores algorithm state D - diagonal of the approximate Hessian, array[0..N-1], (if larger, only leading N elements are used). NOTE: you can change preconditioner "on the fly", during algorithm iterations. NOTE 2: D[i] should be positive. Exception will be thrown otherwise. NOTE 3: you should pass diagonal of approximate Hessian - NOT ITS INVERSE. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgssetprecdiag(minlbfgsstate state, double[] d)
/************************************************************************* Modification of the preconditioner: scale-based diagonal preconditioning. This preconditioning mode can be useful when you don't have approximate diagonal of Hessian, but you know that your variables are badly scaled (for example, one variable is in [1,10], and another in [1000,100000]), and most part of the ill-conditioning comes from different scales of vars. In this case simple scale-based preconditioner, with H[i] = 1/(s[i]^2), can greatly improve convergence. IMPORTANT: you should set scale of your variables with MinLBFGSSetScale() call (before or after MinLBFGSSetPrecScale() call). Without knowledge of the scale of your variables scale-based preconditioner will be just unit matrix. INPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgssetprecscale(minlbfgsstate state)
/************************************************************************* This function sets scaling coefficients for LBFGS optimizer. ALGLIB optimizers use scaling matrices to test stopping conditions (step size and gradient are scaled before comparison with tolerances). Scale of the I-th variable is a translation invariant measure of: a) "how large" the variable is b) how large the step should be to make significant changes in the function Scaling is also used by finite difference variant of the optimizer - step along I-th axis is equal to DiffStep*S[I]. In most optimizers (and in the LBFGS too) scaling is NOT a form of preconditioning. It just affects stopping conditions. You should set preconditioner by separate call to one of the MinLBFGSSetPrec...() functions. There is special preconditioning mode, however, which uses scaling coefficients to form diagonal preconditioning matrix. You can turn this mode on, if you want. But you should understand that scaling is not the same thing as preconditioning - these are two different, although related forms of tuning solver. INPUT PARAMETERS: State - structure stores algorithm state S - array[N], non-zero scaling coefficients S[i] may be negative, sign doesn't matter. -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minlbfgssetscale(minlbfgsstate state, double[] s)
/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0 (default), if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the x+stp*d. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgssetstpmax(minlbfgsstate state, double stpmax)
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, algorithm will call rep() callback function if it is provided to MinLBFGSOptimize(). -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minlbfgssetxrep(minlbfgsstate state, bool needxrep)
public static void function1_grad(double[] x, ref double func, double[] grad, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
    grad[0] = 400*System.Math.Pow(x[0]+3,3);
    grad[1] = 4*System.Math.Pow(x[1]-3,3);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x,y) = 100*(x+3)^4+(y-3)^4
    // using LBFGS method.
    //
    double[] x = new double[]{0,0};
    double epsg = 0.0000000001;
    double epsf = 0;
    double epsx = 0;
    int maxits = 0;
    alglib.minlbfgsstate state;
    alglib.minlbfgsreport rep;

    alglib.minlbfgscreate(1, x, out state);
    alglib.minlbfgssetcond(state, epsg, epsf, epsx, maxits);
    alglib.minlbfgsoptimize(state, function1_grad, null, null);
    alglib.minlbfgsresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,3]
    System.Console.ReadLine();
    return 0;
}


public static void function1_grad(double[] x, ref double func, double[] grad, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
    grad[0] = 400*System.Math.Pow(x[0]+3,3);
    grad[1] = 4*System.Math.Pow(x[1]-3,3);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x,y) = 100*(x+3)^4+(y-3)^4
    // using LBFGS method.
    //
    // Several advanced techniques are demonstrated:
    // * upper limit on step size
    // * restart from new point
    //
    double[] x = new double[]{0,0};
    double epsg = 0.0000000001;
    double epsf = 0;
    double epsx = 0;
    double stpmax = 0.1;
    int maxits = 0;
    alglib.minlbfgsstate state;
    alglib.minlbfgsreport rep;

    // first run
    alglib.minlbfgscreate(1, x, out state);
    alglib.minlbfgssetcond(state, epsg, epsf, epsx, maxits);
    alglib.minlbfgssetstpmax(state, stpmax);
    alglib.minlbfgsoptimize(state, function1_grad, null, null);
    alglib.minlbfgsresults(state, out x, out rep);

    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,3]

    // second run - algorithm is restarted
    x = new double[]{10,10};
    alglib.minlbfgsrestartfrom(state, x);
    alglib.minlbfgsoptimize(state, function1_grad, null, null);
    alglib.minlbfgsresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,3]
    System.Console.ReadLine();
    return 0;
}


public static void s1_grad(double[] x, ref double func, double[] grad, object obj)
{
    //
    // this callback calculates f(x) = (1+x)^(-0.2) + (1-x)^(-0.3) + 1000*x and its gradient.
    //
    // function is trimmed when we calculate it near the singular points or outside of the [-1,+1].
    // Note that we do NOT calculate gradient in this case.
    //
    if( (x[0]<=-0.999999999999) || (x[0]>=+0.999999999999) )
    {
        func = 1.0E+300;
        return;
    }
    func = System.Math.Pow(1+x[0],-0.2) + System.Math.Pow(1-x[0],-0.3) + 1000*x[0];
    grad[0] = -0.2*System.Math.Pow(1+x[0],-1.2) +0.3*System.Math.Pow(1-x[0],-1.3) + 1000;
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x) = (1+x)^(-0.2) + (1-x)^(-0.3) + 1000*x.
    // This function has singularities at the boundary of the [-1,+1], but technique called
    // "function trimming" allows us to solve this optimization problem.
    //
    // See http://www.alglib.net/optimization/tipsandtricks.php#ftrimming for more information
    // on this subject.
    //
    double[] x = new double[]{0};
    double epsg = 1.0e-6;
    double epsf = 0;
    double epsx = 0;
    int maxits = 0;
    alglib.minlbfgsstate state;
    alglib.minlbfgsreport rep;

    alglib.minlbfgscreate(1, x, out state);
    alglib.minlbfgssetcond(state, epsg, epsf, epsx, maxits);
    alglib.minlbfgsoptimize(state, s1_grad, null, null);
    alglib.minlbfgsresults(state, out x, out rep);

    System.Console.WriteLine("{0}", alglib.ap.format(x,5)); // EXPECTED: [-0.99917305]
    System.Console.ReadLine();
    return 0;
}


public static void function1_func(double[] x, ref double func, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of f(x,y) = 100*(x+3)^4+(y-3)^4
    // using numerical differentiation to calculate gradient.
    //
    double[] x = new double[]{0,0};
    double epsg = 0.0000000001;
    double epsf = 0;
    double epsx = 0;
    double diffstep = 1.0e-6;
    int maxits = 0;
    alglib.minlbfgsstate state;
    alglib.minlbfgsreport rep;

    alglib.minlbfgscreatef(1, x, diffstep, out state);
    alglib.minlbfgssetcond(state, epsg, epsf, epsx, maxits);
    alglib.minlbfgsoptimize(state, function1_func, null, null);
    alglib.minlbfgsresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,3]
    System.Console.ReadLine();
    return 0;
}


minlmreport
minlmstate
minlmcreatefgh
minlmcreatefgj
minlmcreatefj
minlmcreatev
minlmcreatevgj
minlmcreatevj
minlmoptimize
minlmrestartfrom
minlmresults
minlmresultsbuf
minlmsetacctype
minlmsetbc
minlmsetcond
minlmsetscale
minlmsetstpmax
minlmsetxrep
minlm_d_fgh Nonlinear Hessian-based optimization for general functions
minlm_d_restarts Efficient restarts of LM optimizer
minlm_d_v Nonlinear least squares optimization using function vector only
minlm_d_vb Bound constrained nonlinear least squares optimization
minlm_d_vj Nonlinear least squares optimization using function vector and Jacobian
/************************************************************************* Optimization report, filled by MinLMResults() function FIELDS: * TerminationType, completion code: * -9 derivative correctness check failed; see Rep.WrongNum, Rep.WrongI, Rep.WrongJ for more information. * 1 relative function improvement is no more than EpsF. * 2 relative step is no more than EpsX. * 4 gradient is no more than EpsG. * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible * IterationsCount, contains iterations count * NFunc, number of function calculations * NJac, number of Jacobian matrix calculations * NGrad, number of gradient calculations * NHess, number of Hessian calculations * NCholesky, number of Cholesky decomposition calculations *************************************************************************/
public class minlmreport { public int iterationscount; public int terminationtype; public int nfunc; public int njac; public int ngrad; public int nhess; public int ncholesky; }
/************************************************************************* Levenberg-Marquardt optimizer. This structure should be created using one of the MinLMCreate???() functions. You should not access its fields directly; use ALGLIB functions to work with it. *************************************************************************/
public class minlmstate { }
/************************************************************************* LEVENBERG-MARQUARDT-LIKE METHOD FOR NON-LINEAR OPTIMIZATION DESCRIPTION: This function is used to find minimum of general form (not "sum-of- -squares") function F = F(x[0], ..., x[n-1]) using its gradient and Hessian. Levenberg-Marquardt modification with L-BFGS pre-optimization and internal pre-conditioned L-BFGS optimization after each Levenberg-Marquardt step is used. REQUIREMENTS: This algorithm will request following information during its operation: * function value F at given point X * F and gradient G (simultaneously) at given point X * F, G and Hessian H (simultaneously) at given point X There are several overloaded versions of MinLMOptimize() function which correspond to different LM-like optimization algorithms provided by this unit. You should choose version which accepts func(), grad() and hess() function pointers. First pointer is used to calculate F at given point, second one calculates F(x) and grad F(x), third one calculates F(x), grad F(x), hess F(x). You can try to initialize MinLMState structure with FGH-function and then use incorrect version of MinLMOptimize() (for example, version which does not provide Hessian matrix), but it will lead to exception being thrown after first attempt to calculate Hessian. USAGE: 1. User initializes algorithm state with MinLMCreateFGH() call 2. User tunes solver parameters with MinLMSetCond(), MinLMSetStpMax() and other functions 3. User calls MinLMOptimize() function which takes algorithm state and pointers (delegates, etc.) to callback functions. 4. User calls MinLMResults() to get solution 5. Optionally, user may call MinLMRestartFrom() to solve another problem with same N but another starting point and/or another function. MinLMRestartFrom() allows to reuse already initialized structure. INPUT PARAMETERS: N - dimension, N>1 * if given, only leading N elements of X are used * if not given, automatically determined from size of X X - initial solution, array[0..N-1] OUTPUT PARAMETERS: State - structure which stores algorithm state NOTES: 1. you may tune stopping conditions with MinLMSetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow, use MinLMSetStpMax() function to bound algorithm's steps. -- ALGLIB -- Copyright 30.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minlmcreatefgh(double[] x, out minlmstate state)
public static void minlmcreatefgh(int n, double[] x, out minlmstate state)

Examples:   [1]  
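
As a sketch of the FGH protocol (the full minlm_d_fgh example is listed above), the callback outline for f(x) = x^2 could look like the following; the exact parameter list of the Hessian callback is our reading of the ndimensional_hess delegate, so treat it as an assumption and check it against the generated headers.

// minimal sketch of an FGH-protocol Hessian callback for f(x) = x[0]^2 (illustrative problem;
// the signature below is assumed to match the ndimensional_hess delegate)
public static void f1_hess(double[] x, ref double func, double[] grad, double[,] hess, object obj)
{
    func = x[0]*x[0];
    grad[0] = 2*x[0];
    hess[0,0] = 2;
}
// typical call sequence (f1_func and f1_grad are assumed companion callbacks):
//   alglib.minlmcreatefgh(new double[]{10}, out state);
//   alglib.minlmoptimize(state, f1_func, f1_grad, f1_hess, null, null);
//   alglib.minlmresults(state, out x, out rep);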

/************************************************************************* This is an obsolete function. Since ALGLIB 3.3 it is equivalent to MinLMCreateFJ(). -- ALGLIB -- Copyright 30.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minlmcreatefgj(int m, double[] x, out minlmstate state)
public static void minlmcreatefgj(int n, int m, double[] x, out minlmstate state)
/************************************************************************* This function is considered obsolete since ALGLIB 3.1.0 and is present for backward compatibility only. We recommend to use MinLMCreateVJ, which provides similar, but more consistent and feature-rich interface. -- ALGLIB -- Copyright 30.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minlmcreatefj(int m, double[] x, out minlmstate state)
public static void minlmcreatefj(int n, int m, double[] x, out minlmstate state)
/************************************************************************* IMPROVED LEVENBERG-MARQUARDT METHOD FOR NON-LINEAR LEAST SQUARES OPTIMIZATION DESCRIPTION: This function is used to find minimum of function which is represented as sum of squares: F(x) = f[0]^2(x[0],...,x[n-1]) + ... + f[m-1]^2(x[0],...,x[n-1]) using value of function vector f[] only. Finite differences are used to calculate Jacobian. REQUIREMENTS: This algorithm will request following information during its operation: * function vector f[] at given point X There are several overloaded versions of MinLMOptimize() function which correspond to different LM-like optimization algorithms provided by this unit. You should choose version which accepts fvec() callback. You can try to initialize MinLMState structure with VJ function and then use incorrect version of MinLMOptimize() (for example, version which works with general form function and does not accept function vector), but it will lead to exception being thrown after first attempt to calculate Jacobian. USAGE: 1. User initializes algorithm state with MinLMCreateV() call 2. User tunes solver parameters with MinLMSetCond(), MinLMSetStpMax() and other functions 3. User calls MinLMOptimize() function which takes algorithm state and callback functions. 4. User calls MinLMResults() to get solution 5. Optionally, user may call MinLMRestartFrom() to solve another problem with same N/M but another starting point and/or another function. MinLMRestartFrom() allows to reuse already initialized structure. INPUT PARAMETERS: N - dimension, N>1 * if given, only leading N elements of X are used * if not given, automatically determined from size of X M - number of functions f[i] X - initial solution, array[0..N-1] DiffStep- differentiation step, >0 OUTPUT PARAMETERS: State - structure which stores algorithm state See also MinLMIteration, MinLMResults. NOTES: 1. you may tune stopping conditions with MinLMSetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow, use MinLMSetStpMax() function to bound algorithm's steps. -- ALGLIB -- Copyright 30.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minlmcreatev(int m, double[] x, double diffstep, out minlmstate state)
public static void minlmcreatev(int n, int m, double[] x, double diffstep, out minlmstate state)

Examples:   [1]  [2]  [3]  

/************************************************************************* This is an obsolete function. Since ALGLIB 3.3 it is equivalent to MinLMCreateVJ(). -- ALGLIB -- Copyright 30.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minlmcreatevgj(int m, double[] x, out minlmstate state)
public static void minlmcreatevgj(int n, int m, double[] x, out minlmstate state)
/************************************************************************* IMPROVED LEVENBERG-MARQUARDT METHOD FOR NON-LINEAR LEAST SQUARES OPTIMIZATION DESCRIPTION: This function is used to find minimum of function which is represented as sum of squares: F(x) = f[0]^2(x[0],...,x[n-1]) + ... + f[m-1]^2(x[0],...,x[n-1]) using value of function vector f[] and Jacobian of f[]. REQUIREMENTS: This algorithm will request following information during its operation: * function vector f[] at given point X * function vector f[] and Jacobian of f[] (simultaneously) at given point There are several overloaded versions of MinLMOptimize() function which correspond to different LM-like optimization algorithms provided by this unit. You should choose version which accepts fvec() and jac() callbacks. First one is used to calculate f[] at given point, second one calculates f[] and Jacobian df[i]/dx[j]. You can try to initialize MinLMState structure with VJ function and then use incorrect version of MinLMOptimize() (for example, version which works with general form function and does not provide Jacobian), but it will lead to exception being thrown after first attempt to calculate Jacobian. USAGE: 1. User initializes algorithm state with MinLMCreateVJ() call 2. User tunes solver parameters with MinLMSetCond(), MinLMSetStpMax() and other functions 3. User calls MinLMOptimize() function which takes algorithm state and callback functions. 4. User calls MinLMResults() to get solution 5. Optionally, user may call MinLMRestartFrom() to solve another problem with same N/M but another starting point and/or another function. MinLMRestartFrom() allows to reuse already initialized structure. INPUT PARAMETERS: N - dimension, N>1 * if given, only leading N elements of X are used * if not given, automatically determined from size of X M - number of functions f[i] X - initial solution, array[0..N-1] OUTPUT PARAMETERS: State - structure which stores algorithm state NOTES: 1. you may tune stopping conditions with MinLMSetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow, use MinLMSetStpMax() function to bound algorithm's steps. -- ALGLIB -- Copyright 30.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minlmcreatevj(int m, double[] x, out minlmstate state)
public static void minlmcreatevj(int n, int m, double[] x, out minlmstate state)

Examples:   [1]  

/************************************************************************* This family of functions is used to launch iterations of the nonlinear optimizer. These functions accept the following parameters: state - algorithm state func - callback which calculates function (or merit function) value func at given point x grad - callback which calculates function (or merit function) value func and gradient grad at given point x hess - callback which calculates function (or merit function) value func, gradient grad and Hessian hess at given point x fvec - callback which calculates function vector fi[] at given point x jac - callback which calculates function vector fi[] and Jacobian jac at given point x rep - optional callback which is called after each iteration can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep can be NULL NOTES: 1. Depending on function used to create state structure, this algorithm may accept Jacobian and/or Hessian and/or gradient. As said above, there are several versions of this function, which accept different sets of callbacks. This flexibility opens the way to subtle errors - you may create state with MinLMCreateFGH() (optimization using Hessian), but call function which does not accept Hessian. So when the algorithm requests the Hessian, there will be no callback to call. In this case an exception will be thrown. Be careful to avoid such errors because there is no way to find them at compile time - you can see them at runtime only. -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minlmoptimize(minlmstate state, ndimensional_fvec fvec, ndimensional_rep rep, object obj)
public static void minlmoptimize(minlmstate state, ndimensional_fvec fvec, ndimensional_jac jac, ndimensional_rep rep, object obj)
public static void minlmoptimize(minlmstate state, ndimensional_func func, ndimensional_grad grad, ndimensional_hess hess, ndimensional_rep rep, object obj)
public static void minlmoptimize(minlmstate state, ndimensional_func func, ndimensional_jac jac, ndimensional_rep rep, object obj)
public static void minlmoptimize(minlmstate state, ndimensional_func func, ndimensional_grad grad, ndimensional_jac jac, ndimensional_rep rep, object obj)

Examples:   [1]  [2]  [3]  [4]  [5]  
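
To restate the pairing rule above in code, the sketch below creates a V-protocol optimizer and passes only the matching fvec callback; the two-residual problem and its starting point are illustrative assumptions.

// minimal sketch: V protocol (function vector only, Jacobian via finite differences)
public static void f1_fvec(double[] x, double[] fi, object obj)
{
    fi[0] = x[0] - 1;   // illustrative residuals
    fi[1] = x[1] + 1;
}
// matching creation/optimization calls:
//   alglib.minlmcreatev(2, new double[]{0,0}, 0.0001, out state);
//   alglib.minlmoptimize(state, f1_fvec, null, null);   // fvec-only overload matches MinLMCreateV()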

/************************************************************************* This subroutine restarts LM algorithm from new point. All optimization parameters are left unchanged. This function allows to solve multiple optimization problems (which must have same number of dimensions) without object reallocation penalty. INPUT PARAMETERS: State - structure used for reverse communication previously allocated with MinLMCreateXXX call. X - new starting point. -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
public static void minlmrestartfrom(minlmstate state, double[] x)

Examples:   [1]  

/************************************************************************* Levenberg-Marquardt algorithm results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report; see comments for this structure for more info. -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minlmresults( minlmstate state, out double[] x, out minlmreport rep)

Examples:   [1]  [2]  [3]  [4]  [5]  

/************************************************************************* Levenberg-Marquardt algorithm results Buffered implementation of MinLMResults(), which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
public static void minlmresultsbuf( minlmstate state, ref double[] x, minlmreport rep)
/************************************************************************* This function is used to change acceleration settings. You can choose between three acceleration strategies: * AccType=0, no acceleration. * AccType=1, secant updates are used to update quadratic model after each iteration. After fixed number of iterations (or after model breakdown) we recalculate quadratic model using analytic Jacobian or finite differences. Number of secant-based iterations depends on optimization settings: about 3 iterations - when we have analytic Jacobian, up to 2*N iterations - when we use finite differences to calculate Jacobian. AccType=1 is recommended when Jacobian calculation cost is prohibitively high (several Mx1 function vector calculations followed by several NxN Cholesky factorizations are faster than calculation of one M*N Jacobian). It should also be used when we have no Jacobian, because finite difference approximation takes too much time to compute. Table below lists optimization protocols (XYZ protocol corresponds to MinLMCreateXYZ) and acceleration types they support (and use by default). ACCELERATION TYPES SUPPORTED BY OPTIMIZATION PROTOCOLS: protocol 0 1 comment V + + VJ + + FGH + DEFAULT VALUES: protocol 0 1 comment V x without acceleration it is so slooooooooow VJ x FGH x NOTE: this function should be called before optimization. Attempt to call it during algorithm iterations may result in unexpected behavior. NOTE: attempt to call this function with unsupported protocol/acceleration combination will result in exception being thrown. -- ALGLIB -- Copyright 14.10.2010 by Bochkanov Sergey *************************************************************************/
public static void minlmsetacctype(minlmstate state, int acctype)
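A short sketch of changing the acceleration type follows; the V-protocol problem is illustrative, and since AccType=1 is already the default for that protocol per the table above, the call below explicitly switches acceleration off.

// minimal sketch: disable secant-based acceleration on a V-protocol optimizer (illustrative problem)
alglib.minlmstate state;
alglib.minlmcreatev(2, new double[]{0,0}, 0.0001, out state);
alglib.minlmsetacctype(state, 0);   // must be called before MinLMOptimize(); AccType=0 means no acceleration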
/************************************************************************* This function sets boundary constraints for LM optimizer Boundary constraints are inactive by default (after initial creation). They are preserved until explicitly turned off with another SetBC() call. INPUT PARAMETERS: State - structure stores algorithm state BndL - lower bounds, array[N]. If some (all) variables are unbounded, you may specify very small number or -INF (latter is recommended because it will allow solver to use better algorithm). BndU - upper bounds, array[N]. If some (all) variables are unbounded, you may specify very large number or +INF (latter is recommended because it will allow solver to use better algorithm). NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case I-th variable will be "frozen" at X[i]=BndL[i]=BndU[i]. NOTE 2: this solver has following useful properties: * bound constraints are always satisfied exactly * function is evaluated only INSIDE area specified by bound constraints or at its boundary -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minlmsetbc( minlmstate state, double[] bndl, double[] bndu)
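The sketch below bounds one variable from below and leaves the other free; the bounds and starting point are illustrative assumptions, and infinities are used as recommended above.

// minimal sketch: x[0] in [0, +INF), x[1] unbounded (illustrative bounds)
double[] bndl = new double[]{ 0.0, System.Double.NegativeInfinity };
double[] bndu = new double[]{ System.Double.PositiveInfinity, System.Double.PositiveInfinity };
alglib.minlmstate state;
alglib.minlmcreatev(2, new double[]{0.5,0.5}, 0.0001, out state);
alglib.minlmsetbc(state, bndl, bndu);   // BndL[i]=BndU[i] would freeze the i-th variable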
/************************************************************************* This function sets stopping conditions for Levenberg-Marquardt optimization algorithm. INPUT PARAMETERS: State - structure which stores algorithm state EpsG - >=0 The subroutine finishes its work if the condition |v|<EpsG is satisfied, where: * |.| means Euclidean norm * v - scaled gradient vector, v[i]=g[i]*s[i] * g - gradient * s - scaling coefficients set by MinLMSetScale() EpsF - >=0 The subroutine finishes its work if on k+1-th iteration the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} is satisfied. EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |v|<=EpsX is fulfilled, where: * |.| means Euclidean norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by MinLMSetScale() MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Only Levenberg-Marquardt iterations are counted (L-BFGS/CG iterations are NOT counted because their cost is very low compared to that of LM). Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (small EpsX). -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minlmsetcond( minlmstate state, double epsg, double epsf, double epsx, int maxits)

Examples:   [1]  [2]  [3]  [4]  [5]  

/************************************************************************* This function sets scaling coefficients for LM optimizer. ALGLIB optimizers use scaling matrices to test stopping conditions (step size and gradient are scaled before comparison with tolerances). Scale of the I-th variable is a translation invariant measure of: a) "how large" the variable is b) how large the step should be to make significant changes in the function Generally, scale is NOT considered to be a form of preconditioner. But LM optimizer is unique in that it uses scaling matrix both in the stopping condition tests and as Marquardt damping factor. Proper scaling is very important for the algorithm performance. It is less important for the quality of results, but still has some influence (it is easier to converge when variables are properly scaled, so premature stopping is possible when very badly scaled variables are combined with relaxed stopping conditions). INPUT PARAMETERS: State - structure stores algorithm state S - array[N], non-zero scaling coefficients S[i] may be negative, sign doesn't matter. -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minlmsetscale(minlmstate state, double[] s)
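A short hedged sketch of typical usage follows; the scale values are purely illustrative and the optimizer creation mirrors the examples below.

    alglib.minlmstate state;
    alglib.minlmcreatev(2, new double[]{0,0}, 0.0001, out state);
    // suppose x0 varies on the order of 1E+6 and x1 on the order of 1E-3
    double[] s = new double[]{1.0E+6, 1.0E-3};
    alglib.minlmsetscale(state, s);
    alglib.minlmsetcond(state, 0, 0, 1.0E-6, 0);  // EpsX is now applied to the scaled step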
/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize a target function which contains exp() or other fast-growing functions, and the optimization algorithm makes steps so large that they lead to overflow. This function allows us to reject steps that are too large (and therefore expose us to possible overflow) without actually calculating the function value at x+stp*d. NOTE: non-zero StpMax leads to moderate performance degradation because intermediate step of preconditioned L-BFGS optimization is incompatible with limits on step size. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minlmsetstpmax(minlmstate state, double stpmax)
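A minimal sketch of the call (the step limit of 10 is an arbitrary illustrative value):

    alglib.minlmstate state;
    alglib.minlmcreatev(2, new double[]{0,0}, 0.0001, out state);
    alglib.minlmsetstpmax(state, 10.0);  // no single step will be longer than 10; pass 0.0 to remove the limit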
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, algorithm will call rep() callback function if it is provided to MinLMOptimize(). Both Levenberg-Marquardt and internal L-BFGS iterations are reported. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
public static void minlmsetxrep(minlmstate state, bool needxrep)
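A hedged sketch of how reporting can be wired up, assuming the iteration-report callback takes (double[] x, double func, object obj), which is the convention used for the rep argument of ALGLIB optimizers; the callback name progress_rep is hypothetical.

    public static void progress_rep(double[] x, double func, object obj)
    {
        // called on every reported iteration with the current point and function value
        System.Console.WriteLine("current F = {0}", func);
    }

    alglib.minlmsetxrep(state, true);
    // function1_fvec is the function-vector callback defined in the examples below;
    // the rep slot (passed as null there) now receives our callback
    alglib.minlmoptimize(state, function1_fvec, progress_rep, null);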
public static void function1_func(double[] x, ref double func, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
}
public static void function1_grad(double[] x, ref double func, double[] grad, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
    grad[0] = 400*System.Math.Pow(x[0]+3,3);
    grad[1] = 4*System.Math.Pow(x[1]-3,3);
}
public static void function1_hess(double[] x, ref double func, double[] grad, double[,] hess, object obj)
{
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // its derivatives df/dx0 and df/dx1
    // and its Hessian.
    func = 100*System.Math.Pow(x[0]+3,4) + System.Math.Pow(x[1]-3,4);
    grad[0] = 400*System.Math.Pow(x[0]+3,3);
    grad[1] = 4*System.Math.Pow(x[1]-3,3);
    hess[0,0] = 1200*System.Math.Pow(x[0]+3,2);
    hess[0,1] = 0;
    hess[1,0] = 0;
    hess[1,1] = 12*System.Math.Pow(x[1]-3,2);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of F(x0,x1) = 100*(x0+3)^4+(x1-3)^4
    // using "FGH" mode of the Levenberg-Marquardt optimizer.
    //
    // F is treated like a monolithic function without internal structure,
    // i.e. we do NOT represent it as a sum of squares.
    //
    // Optimization algorithm uses:
    // * function value F(x0,x1)
    // * gradient G={dF/dxi}
    // * Hessian H={d2F/(dxi*dxj)}
    //
    double[] x = new double[]{0,0};
    double epsg = 0.0000000001;
    double epsf = 0;
    double epsx = 0;
    int maxits = 0;
    alglib.minlmstate state;
    alglib.minlmreport rep;

    alglib.minlmcreatefgh(x, out state);
    alglib.minlmsetcond(state, epsg, epsf, epsx, maxits);
    alglib.minlmoptimize(state, function1_func, function1_grad, function1_hess, null, null);
    alglib.minlmresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,+3]
    System.Console.ReadLine();
    return 0;
}


public static void  function1_fvec(double[] x, double[] fi, object obj)
{
    //
    // this callback calculates
    // f0(x0,x1) = 10*(x0+3)^2,
    // f1(x0,x1) = (x1-3)^2
    //
    fi[0] = 10*System.Math.Pow(x[0]+3,2);
    fi[1] = System.Math.Pow(x[1]-3,2);
}
public static void  function2_fvec(double[] x, double[] fi, object obj)
{
    //
    // this callback calculates
    // f0(x0,x1) = x0^2+1,
    // f1(x0,x1) = x1-1
    //
    fi[0] = x[0]*x[0]+1;
    fi[1] = x[1]-1;
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of F(x0,x1) = f0^2+f1^2, where 
    //
    //     f0(x0,x1) = 10*(x0+3)^2
    //     f1(x0,x1) = (x1-3)^2
    //
    // using several starting points and efficient restarts.
    //
    double[] x;
    double epsg = 0.0000000001;
    double epsf = 0;
    double epsx = 0;
    int maxits = 0;
    alglib.minlmstate state;
    alglib.minlmreport rep;

    //
    // create optimizer using minlmcreatev()
    //
    x = new double[]{10,10};
    alglib.minlmcreatev(2, x, 0.0001, out state);
    alglib.minlmsetcond(state, epsg, epsf, epsx, maxits);
    alglib.minlmoptimize(state, function1_fvec, null, null);
    alglib.minlmresults(state, out x, out rep);
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,+3]

    //
    // restart optimizer using minlmrestartfrom()
    //
    // we can use different starting point, different function,
    // different stopping conditions, but problem size
    // must remain unchanged.
    //
    x = new double[]{4,4};
    alglib.minlmrestartfrom(state, x);
    alglib.minlmoptimize(state, function2_fvec, null, null);
    alglib.minlmresults(state, out x, out rep);
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [0,1]
    System.Console.ReadLine();
    return 0;
}


public static void  function1_fvec(double[] x, double[] fi, object obj)
{
    //
    // this callback calculates
    // f0(x0,x1) = 10*(x0+3)^2,
    // f1(x0,x1) = (x1-3)^2
    //
    fi[0] = 10*System.Math.Pow(x[0]+3,2);
    fi[1] = System.Math.Pow(x[1]-3,2);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of F(x0,x1) = f0^2+f1^2, where 
    //
    //     f0(x0,x1) = 10*(x0+3)^2
    //     f1(x0,x1) = (x1-3)^2
    //
    // using "V" mode of the Levenberg-Marquardt optimizer.
    //
    // Optimization algorithm uses:
    // * function vector f[] = {f0,f1}
    //
    // No other information (Jacobian, gradient, etc.) is needed.
    //
    double[] x = new double[]{0,0};
    double epsg = 0.0000000001;
    double epsf = 0;
    double epsx = 0;
    int maxits = 0;
    alglib.minlmstate state;
    alglib.minlmreport rep;

    alglib.minlmcreatev(2, x, 0.0001, out state);
    alglib.minlmsetcond(state, epsg, epsf, epsx, maxits);
    alglib.minlmoptimize(state, function1_fvec, null, null);
    alglib.minlmresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,+3]
    System.Console.ReadLine();
    return 0;
}


public static void  function1_fvec(double[] x, double[] fi, object obj)
{
    //
    // this callback calculates
    // f0(x0,x1) = 10*(x0+3)^2,
    // f1(x0,x1) = (x1-3)^2
    //
    fi[0] = 10*System.Math.Pow(x[0]+3,2);
    fi[1] = System.Math.Pow(x[1]-3,2);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of F(x0,x1) = f0^2+f1^2, where 
    //
    //     f0(x0,x1) = 10*(x0+3)^2
    //     f1(x0,x1) = (x1-3)^2
    //
    // with boundary constraints
    //
    //     -1 <= x0 <= +1
    //     -1 <= x1 <= +1
    //
    // using "V" mode of the Levenberg-Marquardt optimizer.
    //
    // Optimization algorithm uses:
    // * function vector f[] = {f0,f1}
    //
    // No other information (Jacobian, gradient, etc.) is needed.
    //
    double[] x = new double[]{0,0};
    double[] bndl = new double[]{-1,-1};
    double[] bndu = new double[]{+1,+1};
    double epsg = 0.0000000001;
    double epsf = 0;
    double epsx = 0;
    int maxits = 0;
    alglib.minlmstate state;
    alglib.minlmreport rep;

    alglib.minlmcreatev(2, x, 0.0001, out state);
    alglib.minlmsetbc(state, bndl, bndu);
    alglib.minlmsetcond(state, epsg, epsf, epsx, maxits);
    alglib.minlmoptimize(state, function1_fvec, null, null);
    alglib.minlmresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-1,+1]
    System.Console.ReadLine();
    return 0;
}


public static void  function1_fvec(double[] x, double[] fi, object obj)
{
    //
    // this callback calculates
    // f0(x0,x1) = 10*(x0+3)^2,
    // f1(x0,x1) = (x1-3)^2
    //
    fi[0] = 10*System.Math.Pow(x[0]+3,2);
    fi[1] = System.Math.Pow(x[1]-3,2);
}
public static void  function1_jac(double[] x, double[] fi, double[,] jac, object obj)
{
    // this callback calculates
    // f0(x0,x1) = 10*(x0+3)^2,
    // f1(x0,x1) = (x1-3)^2
    // and Jacobian matrix J = [dfi/dxj]
    fi[0] = 10*System.Math.Pow(x[0]+3,2);
    fi[1] = System.Math.Pow(x[1]-3,2);
    jac[0,0] = 20*(x[0]+3);
    jac[0,1] = 0;
    jac[1,0] = 0;
    jac[1,1] = 2*(x[1]-3);
}
public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of F(x0,x1) = f0^2+f1^2, where 
    //
    //     f0(x0,x1) = 10*(x0+3)^2
    //     f1(x0,x1) = (x1-3)^2
    //
    // using "VJ" mode of the Levenberg-Marquardt optimizer.
    //
    // Optimization algorithm uses:
    // * function vector f[] = {f0,f1}
    // * Jacobian matrix J = {dfi/dxj}.
    //
    double[] x = new double[]{0,0};
    double epsg = 0.0000000001;
    double epsf = 0;
    double epsx = 0;
    int maxits = 0;
    alglib.minlmstate state;
    alglib.minlmreport rep;

    alglib.minlmcreatevj(2, x, out state);
    alglib.minlmsetcond(state, epsg, epsf, epsx, maxits);
    alglib.minlmoptimize(state, function1_fvec, function1_jac, null, null);
    alglib.minlmresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [-3,+3]
    System.Console.ReadLine();
    return 0;
}


minqpreport
minqpstate
minqpcreate
minqpoptimize
minqpresults
minqpresultsbuf
minqpsetalgocholesky
minqpsetbc
minqpsetlinearterm
minqpsetorigin
minqpsetquadraticterm
minqpsetstartingpoint
minqp_d_bc1 Constrained dense quadratic programming
minqp_d_u1 Unconstrained dense quadratic programming
/************************************************************************* This structure stores optimization report: * InnerIterationsCount number of inner iterations * OuterIterationsCount number of outer iterations * NCholesky number of Cholesky decompositions * NMV number of matrix-vector products (only products calculated as part of iterative process are counted) * TerminationType completion code (see below) Completion codes: * -5 inappropriate solver was used: * Cholesky solver for semidefinite or indefinite problems * Cholesky solver for problems with non-boundary constraints * -3 inconsistent constraints (or, maybe, feasible point is too hard to find). If you are sure that constraints are feasible, try to restart the optimizer with a better initial approximation. * 4 successful completion * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible, X contains best point found so far. *************************************************************************/
public class minqpreport { public int inneriterationscount; public int outeriterationscount; public int nmv; public int ncholesky; public int terminationtype; }
/************************************************************************* This object stores nonlinear optimizer state. You should use functions provided by MinQP subpackage to work with this object *************************************************************************/
public class minqpstate { }
/************************************************************************* CONSTRAINED QUADRATIC PROGRAMMING The subroutine creates QP optimizer. After initial creation, it contains default optimization problem with zero quadratic and linear terms and no constraints. You should set quadratic/linear terms with calls to functions provided by MinQP subpackage. INPUT PARAMETERS: N - problem size OUTPUT PARAMETERS: State - optimizer with zero quadratic/linear terms and no constraints -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minqpcreate(int n, out minqpstate state)

Examples:   [1]  [2]  

/************************************************************************* This function solves quadratic programming problem. You should call it after setting solver options with MinQPSet...() calls. INPUT PARAMETERS: State - algorithm state You should use MinQPResults() function to access results after calls to this function. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minqpoptimize(minqpstate state)

Examples:   [1]  [2]  

/************************************************************************* QP solver results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report. You should check Rep.TerminationType, which contains the completion code, and you may also check other fields which contain additional information about algorithm functioning. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minqpresults( minqpstate state, out double[] x, out minqpreport rep)

Examples:   [1]  [2]  

/************************************************************************* QP results Buffered implementation of MinQPResults() which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minqpresultsbuf( minqpstate state, ref double[] x, minqpreport rep)
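A hedged sketch of the buffered variant in a reuse loop; it assumes the quadratic/linear terms would be configured where indicated and that minqpreport can be constructed directly, and the loop count is purely illustrative.

    alglib.minqpstate state;
    alglib.minqpcreate(2, out state);
    // ... quadratic/linear terms and constraints would be set here ...
    double[] xbuf = new double[0];                        // grown by the solver on the first call, then reused
    alglib.minqpreport repbuf = new alglib.minqpreport();
    for(int k=0; k<100; k++)
    {
        alglib.minqpoptimize(state);
        alglib.minqpresultsbuf(state, ref xbuf, repbuf);  // no fresh allocation of X on later iterations
    }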
/************************************************************************* This function tells solver to use Cholesky-based algorithm. Cholesky-based algorithm can be used when: * problem is convex * there are no constraints or only boundary constraints are present This algorithm has O(N^3) complexity for unconstrained problem and is up to several times slower on bound constrained problems (these additional iterations are needed to identify active constraints). INPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minqpsetalgocholesky(minqpstate state)
/************************************************************************* This function sets boundary constraints for QP solver Boundary constraints are inactive by default (after initial creation). After being set, they are preserved until explicitly turned off with another SetBC() call. INPUT PARAMETERS: State - structure stores algorithm state BndL - lower bounds, array[N]. If some (all) variables are unbounded, you may specify very small number or -INF (latter is recommended because it will allow solver to use better algorithm). BndU - upper bounds, array[N]. If some (all) variables are unbounded, you may specify very large number or +INF (latter is recommended because it will allow solver to use better algorithm). NOTE: it is possible to specify BndL[i]=BndU[i]. In this case I-th variable will be "frozen" at X[i]=BndL[i]=BndU[i]. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minqpsetbc( minqpstate state, double[] bndl, double[] bndu)
/************************************************************************* This function sets linear term for QP solver. By default, linear term is zero. INPUT PARAMETERS: State - structure which stores algorithm state B - linear term, array[N]. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minqpsetlinearterm(minqpstate state, double[] b)

Examples:   [1]  [2]  

/************************************************************************* This function sets origin for QP solver. By default, the following QP program is solved: min(0.5*x'*A*x+b'*x) This function allows one to solve a different problem: min(0.5*(x-x_origin)'*A*(x-x_origin)+b'*(x-x_origin)) INPUT PARAMETERS: State - structure which stores algorithm state XOrigin - origin, array[N]. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minqpsetorigin(minqpstate state, double[] xorigin)
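A small self-contained sketch of the origin-shifted formulation; the matrix and origin below are illustrative only.

    // minimize 0.5*(x-x_origin)'*A*(x-x_origin) with A=diag(2,2) and x_origin=[1,2];
    // with zero linear term the minimizer is x_origin itself
    double[,] a = new double[,]{{2,0},{0,2}};
    double[] xorigin = new double[]{1,2};
    double[] x;
    alglib.minqpstate state;
    alglib.minqpreport rep;
    alglib.minqpcreate(2, out state);
    alglib.minqpsetquadraticterm(state, a);
    alglib.minqpsetorigin(state, xorigin);
    alglib.minqpoptimize(state);
    alglib.minqpresults(state, out x, out rep);  // x is expected to be close to [1,2]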
/************************************************************************* This function sets quadratic term for QP solver. By default quadratic term is zero. IMPORTANT: this solver minimizes following function: f(x) = 0.5*x'*A*x + b'*x. Note that quadratic term has 0.5 before it. So if you want to minimize f(x) = x^2 + x you should rewrite your problem as follows: f(x) = 0.5*(2*x^2) + x and your matrix A will be equal to [[2.0]], not to [[1.0]] INPUT PARAMETERS: State - structure which stores algorithm state A - matrix, array[N,N] IsUpper - (optional) storage type: * if True, symmetric matrix A is given by its upper triangle, and the lower triangle isn’t used * if False, symmetric matrix A is given by its lower triangle, and the upper triangle isn’t used * if not given, both lower and upper triangles must be filled. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minqpsetquadraticterm(minqpstate state, double[,] a)
public static void minqpsetquadraticterm( minqpstate state, double[,] a, bool isupper)

Examples:   [1]  [2]  

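A brief sketch of the IsUpper overload; the matrix values are illustrative.

    // f(x) = 0.5*x'*A*x with symmetric A = [[2,1],[1,2]]; only the upper triangle is filled
    double[,] a = new double[,]{{2,1},{0,2}};
    alglib.minqpstate state;
    alglib.minqpcreate(2, out state);
    alglib.minqpsetquadraticterm(state, a, true); // IsUpper=true: the lower triangle is ignored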
/************************************************************************* This function sets starting point for QP solver. It is useful to have good initial approximation to the solution, because it will increase speed of convergence and identification of active constraints. INPUT PARAMETERS: State - structure which stores algorithm state X - starting point, array[N]. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
public static void minqpsetstartingpoint(minqpstate state, double[] x)

Examples:   [1]  [2]  


public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of F(x0,x1) = x0^2 + x1^2 -6*x0 - 4*x1
    // subject to bound constraints 0<=x0<=2.5, 0<=x1<=2.5
    //
    // Exact solution is [x0,x1] = [2.5,2]
    //
    // We provide the algorithm with a starting point. With such a small problem a good
    // starting point is not really necessary, but with a high-dimensional problem it can
    // save us a lot of time.
    //
    // IMPORTANT: this solver minimizes the following function:
    //     f(x) = 0.5*x'*A*x + b'*x.
    // Note that the quadratic term has 0.5 before it. So if you want to minimize a
    // quadratic function, you should rewrite it in such a way that the quadratic term
    // is multiplied by 0.5 too.
    // For example, our function is f(x)=x0^2+x1^2+..., but we rewrite it as 
    //     f(x) = 0.5*(2*x0^2+2*x1^2) + ....
    // and pass diag(2,2) as quadratic term - NOT diag(1,1)!
    //
    double[,] a = new double[,]{{2,0},{0,2}};
    double[] b = new double[]{-6,-4};
    double[] x0 = new double[]{0,1};
    double[] bndl = new double[]{0.0,0.0};
    double[] bndu = new double[]{2.5,2.5};
    double[] x;
    alglib.minqpstate state;
    alglib.minqpreport rep;

    alglib.minqpcreate(2, out state);
    alglib.minqpsetquadraticterm(state, a);
    alglib.minqpsetlinearterm(state, b);
    alglib.minqpsetstartingpoint(state, x0);
    alglib.minqpsetbc(state, bndl, bndu);
    alglib.minqpoptimize(state);
    alglib.minqpresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [2.5,2]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // This example demonstrates minimization of F(x0,x1) = x0^2 + x1^2 -6*x0 - 4*x1
    //
    // Exact solution is [x0,x1] = [3,2]
    //
    // We provide the algorithm with a starting point, although in this case
    // (dense matrix, no constraints) it can work without such information.
    //
    // IMPORTANT: this solver minimizes the following function:
    //     f(x) = 0.5*x'*A*x + b'*x.
    // Note that the quadratic term has 0.5 before it. So if you want to minimize a
    // quadratic function, you should rewrite it in such a way that the quadratic term
    // is multiplied by 0.5 too.
    // For example, our function is f(x)=x0^2+x1^2+..., but we rewrite it as 
    //     f(x) = 0.5*(2*x0^2+2*x1^2) + ....
    // and pass diag(2,2) as quadratic term - NOT diag(1,1)!
    //
    double[,] a = new double[,]{{2,0},{0,2}};
    double[] b = new double[]{-6,-4};
    double[] x0 = new double[]{0,1};
    double[] x;
    alglib.minqpstate state;
    alglib.minqpreport rep;

    alglib.minqpcreate(2, out state);
    alglib.minqpsetquadraticterm(state, a);
    alglib.minqpsetlinearterm(state, b);
    alglib.minqpsetstartingpoint(state, x0);
    alglib.minqpoptimize(state);
    alglib.minqpresults(state, out x, out rep);

    System.Console.WriteLine("{0}", rep.terminationtype); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(x,2)); // EXPECTED: [3,2]
    System.Console.ReadLine();
    return 0;
}


multilayerperceptron
mlpactivationfunction
mlpavgce
mlpavgerror
mlpavgrelerror
mlpclserror
mlpcreate0
mlpcreate1
mlpcreate2
mlpcreateb0
mlpcreateb1
mlpcreateb2
mlpcreatec0
mlpcreatec1
mlpcreatec2
mlpcreater0
mlpcreater1
mlpcreater2
mlperror
mlperrorn
mlpgetinputscaling
mlpgetlayerscount
mlpgetlayersize
mlpgetneuroninfo
mlpgetoutputscaling
mlpgetweight
mlpgrad
mlpgradbatch
mlpgradn
mlpgradnbatch
mlphessianbatch
mlphessiannbatch
mlpissoftmax
mlpprocess
mlpprocessi
mlpproperties
mlprandomize
mlprandomizefull
mlprelclserror
mlprmserror
mlpserialize
mlpsetinputscaling
mlpsetneuroninfo
mlpsetoutputscaling
mlpsetweight
mlpunserialize
/************************************************************************* Multilayer perceptron (neural network) object. You should use functions provided by this subpackage to work with this object. *************************************************************************/
public class multilayerperceptron { }
/************************************************************************* Neural network activation function INPUT PARAMETERS: NET - neuron input K - function index (zero for linear function) OUTPUT PARAMETERS: F - function DF - its derivative D2F - its second derivative -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpactivationfunction( double net, int k, out double f, out double df, out double d2f)
/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: Network - neural network XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*LN(2)). Zero if network solves regression task. -- ALGLIB -- Copyright 08.01.2009 by Bochkanov Sergey *************************************************************************/
public static double mlpavgce( multilayerperceptron network, double[,] xy, int npoints)
/************************************************************************* Average error on the test set INPUT PARAMETERS: Network - neural network XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average error when estimating posterior probabilities. -- ALGLIB -- Copyright 11.03.2008 by Bochkanov Sergey *************************************************************************/
public static double mlpavgerror( multilayerperceptron network, double[,] xy, int npoints)
/************************************************************************* Average relative error on the test set INPUT PARAMETERS: Network - neural network XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average relative error when estimating posterior probability of belonging to the correct class. -- ALGLIB -- Copyright 11.03.2008 by Bochkanov Sergey *************************************************************************/
public static double mlpavgrelerror( multilayerperceptron network, double[,] xy, int npoints)
/************************************************************************* Classification error -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static int mlpclserror( multilayerperceptron network, double[,] xy, int ssize)
/************************************************************************* Creates neural network with NIn inputs, NOut outputs, without hidden layers, with linear output layer. Network weights are filled with small random values. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpcreate0( int nin, int nout, out multilayerperceptron network)
/************************************************************************* Same as MLPCreate0, but with one hidden layer (NHid neurons) with non-linear activation function. Output layer is linear. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpcreate1( int nin, int nhid, int nout, out multilayerperceptron network)
/************************************************************************* Same as MLPCreate0, but with two hidden layers (NHid1 and NHid2 neurons) with non-linear activation function. Output layer is linear. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpcreate2( int nin, int nhid1, int nhid2, int nout, out multilayerperceptron network)
/************************************************************************* Creates neural network with NIn inputs, NOut outputs, without hidden layers with non-linear output layer. Network weights are filled with small random values. Activation function of the output layer takes values: (B, +INF), if D>=0 or (-INF, B), if D<0. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
public static void mlpcreateb0( int nin, int nout, double b, double d, out multilayerperceptron network)
/************************************************************************* Same as MLPCreateB0 but with non-linear hidden layer. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
public static void mlpcreateb1( int nin, int nhid, int nout, double b, double d, out multilayerperceptron network)
/************************************************************************* Same as MLPCreateB0 but with two non-linear hidden layers. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
public static void mlpcreateb2( int nin, int nhid1, int nhid2, int nout, double b, double d, out multilayerperceptron network)
/************************************************************************* Creates classifier network with NIn inputs and NOut possible classes. Network contains no hidden layers and linear output layer with SOFTMAX-normalization (so outputs sum up to 1.0 and converge to posterior probabilities). -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpcreatec0( int nin, int nout, out multilayerperceptron network)
/************************************************************************* Same as MLPCreateC0, but with one non-linear hidden layer. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpcreatec1( int nin, int nhid, int nout, out multilayerperceptron network)
/************************************************************************* Same as MLPCreateC0, but with two non-linear hidden layers. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpcreatec2( int nin, int nhid1, int nhid2, int nout, out multilayerperceptron network)
/************************************************************************* Creates neural network with NIn inputs, NOut outputs, without hidden layers with non-linear output layer. Network weights are filled with small random values. Activation function of the output layer takes values [A,B]. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
public static void mlpcreater0( int nin, int nout, double a, double b, out multilayerperceptron network)
/************************************************************************* Same as MLPCreateR0, but with non-linear hidden layer. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
public static void mlpcreater1( int nin, int nhid, int nout, double a, double b, out multilayerperceptron network)
/************************************************************************* Same as MLPCreateR0, but with two non-linear hidden layers. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
public static void mlpcreater2( int nin, int nhid1, int nhid2, int nout, double a, double b, out multilayerperceptron network)
/************************************************************************* Error function for neural network, internal subroutine. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static double mlperror( multilayerperceptron network, double[,] xy, int ssize)
/************************************************************************* Natural error function for neural network, internal subroutine. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static double mlperrorn( multilayerperceptron network, double[,] xy, int ssize)
/************************************************************************* This function returns offset/scaling coefficients for I-th input of the network. INPUT PARAMETERS: Network - network I - input index OUTPUT PARAMETERS: Mean - mean term Sigma - sigma term, guaranteed to be nonzero. I-th input is passed through linear transformation IN[i] = (IN[i]-Mean)/Sigma before feeding to the network -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
public static void mlpgetinputscaling( multilayerperceptron network, int i, out double mean, out double sigma)
/************************************************************************* This function returns total number of layers (including input, hidden and output layers). -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
public static int mlpgetlayerscount(multilayerperceptron network)
/************************************************************************* This function returns size of K-th layer. K=0 corresponds to input layer, K=CNT-1 corresponds to output layer. Size of the output layer is always equal to the number of outputs, although when we have softmax-normalized network, last neuron doesn't have any connections - it is just zero. -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
public static int mlpgetlayersize(multilayerperceptron network, int k)
/************************************************************************* This function returns information about the I-th neuron of the K-th layer INPUT PARAMETERS: Network - network K - layer index I - neuron index (within layer) OUTPUT PARAMETERS: FKind - activation function type (used by MLPActivationFunction()) this value is zero for input or linear neurons Threshold - also called offset or bias; zero for input neurons NOTE: this function throws an exception if a layer or neuron with the given index does not exist. -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
public static void mlpgetneuroninfo( multilayerperceptron network, int k, int i, out int fkind, out double threshold)
/************************************************************************* This function returns offset/scaling coefficients for I-th output of the network. INPUT PARAMETERS: Network - network I - output index OUTPUT PARAMETERS: Mean - mean term Sigma - sigma term, guaranteed to be nonzero. I-th output is passed through linear transformation OUT[i] = OUT[i]*Sigma+Mean before returning it to user. In case we have SOFTMAX-normalized network, we return (Mean,Sigma)=(0.0,1.0). -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
public static void mlpgetoutputscaling( multilayerperceptron network, int i, out double mean, out double sigma)
/************************************************************************* This function returns information about connection from I0-th neuron of K0-th layer to I1-th neuron of K1-th layer. INPUT PARAMETERS: Network - network K0 - layer index I0 - neuron index (within layer) K1 - layer index I1 - neuron index (within layer) RESULT: connection weight (zero for non-existent connections) This function: 1. throws an exception if a layer or neuron with the given index does not exist. 2. returns zero if neurons exist, but there is no connection between them -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
public static double mlpgetweight( multilayerperceptron network, int k0, int i0, int k1, int i1)
/************************************************************************* Gradient calculation INPUT PARAMETERS: Network - network initialized with one of the network creation funcs X - input vector, length of array must be at least NIn DesiredY- desired outputs, length of array must be at least NOut Grad - possibly preallocated array. If size of array is smaller than WCount, it will be reallocated. It is recommended to reuse previously allocated array to reduce allocation overhead. OUTPUT PARAMETERS: E - error function, SUM(sqr(y[i]-desiredy[i])/2,i) Grad - gradient of E with respect to weights of network, array[WCount] -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpgrad( multilayerperceptron network, double[] x, double[] desiredy, out double e, ref double[] grad)
/************************************************************************* Batch gradient calculation for a set of inputs/outputs INPUT PARAMETERS: Network - network initialized with one of the network creation funcs XY - set of inputs/outputs; one sample = one row; first NIn columns contain inputs, next NOut columns - desired outputs. SSize - number of elements in XY Grad - possibly preallocated array. If size of array is smaller than WCount, it will be reallocated. It is recommended to reuse previously allocated array to reduce allocation overhead. OUTPUT PARAMETERS: E - error function, SUM(sqr(y[i]-desiredy[i])/2,i) Grad - gradient of E with respect to weights of network, array[WCount] -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpgradbatch( multilayerperceptron network, double[,] xy, int ssize, out double e, ref double[] grad)
/************************************************************************* Gradient calculation (natural error function is used) INPUT PARAMETERS: Network - network initialized with one of the network creation funcs X - input vector, length of array must be at least NIn DesiredY- desired outputs, length of array must be at least NOut Grad - possibly preallocated array. If size of array is smaller than WCount, it will be reallocated. It is recommended to reuse previously allocated array to reduce allocation overhead. OUTPUT PARAMETERS: E - error function, sum-of-squares for regression networks, cross-entropy for classification networks. Grad - gradient of E with respect to weights of network, array[WCount] -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpgradn( multilayerperceptron network, double[] x, double[] desiredy, out double e, ref double[] grad)
/************************************************************************* Batch gradient calculation for a set of inputs/outputs (natural error function is used) INPUT PARAMETERS: Network - network initialized with one of the network creation funcs XY - set of inputs/outputs; one sample = one row; first NIn columns contain inputs, next NOut columns - desired outputs. SSize - number of elements in XY Grad - possibly preallocated array. If size of array is smaller than WCount, it will be reallocated. It is recommended to reuse previously allocated array to reduce allocation overhead. OUTPUT PARAMETERS: E - error function, sum-of-squares for regression networks, cross-entropy for classification networks. Grad - gradient of E with respect to weights of network, array[WCount] -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpgradnbatch( multilayerperceptron network, double[,] xy, int ssize, out double e, ref double[] grad)
/************************************************************************* Batch Hessian calculation using R-algorithm. Internal subroutine. -- ALGLIB -- Copyright 26.01.2008 by Bochkanov Sergey. Hessian calculation based on R-algorithm described in "Fast Exact Multiplication by the Hessian", B. A. Pearlmutter, Neural Computation, 1994. *************************************************************************/
public static void mlphessianbatch( multilayerperceptron network, double[,] xy, int ssize, out double e, ref double[] grad, ref double[,] h)
/************************************************************************* Batch Hessian calculation (natural error function) using R-algorithm. Internal subroutine. -- ALGLIB -- Copyright 26.01.2008 by Bochkanov Sergey. Hessian calculation based on R-algorithm described in "Fast Exact Multiplication by the Hessian", B. A. Pearlmutter, Neural Computation, 1994. *************************************************************************/
public static void mlphessiannbatch( multilayerperceptron network, double[,] xy, int ssize, out double e, ref double[] grad, ref double[,] h)
/************************************************************************* Tells whether network is SOFTMAX-normalized (i.e. classifier) or not. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static bool mlpissoftmax(multilayerperceptron network)
/************************************************************************* Processing INPUT PARAMETERS: Network - neural network X - input vector, array[0..NIn-1]. OUTPUT PARAMETERS: Y - result. Regression estimate when solving regression task, vector of posterior probabilities for classification task. See also MLPProcessI -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpprocess( multilayerperceptron network, double[] x, ref double[] y)
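A minimal hedged sketch of network creation and processing; the layer sizes and input values are arbitrary.

    alglib.multilayerperceptron net;
    alglib.mlpcreate1(2, 5, 1, out net);        // 2 inputs, one hidden layer with 5 neurons, 1 linear output
    double[] x = new double[]{1.0, 2.0};
    double[] y = new double[1];                 // preallocated output buffer (NOut elements)
    alglib.mlpprocess(net, x, ref y);
    System.Console.WriteLine("{0}", alglib.ap.format(y,4));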
/************************************************************************* 'interactive' variant of MLPProcess for languages like Python which support constructs like "Y = MLPProcess(NN,X)" and interactive mode of the interpreter This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 21.09.2010 by Bochkanov Sergey *************************************************************************/
public static void mlpprocessi( multilayerperceptron network, double[] x, out double[] y)
/************************************************************************* Returns information about initialized network: number of inputs, outputs, weights. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpproperties( multilayerperceptron network, out int nin, out int nout, out int wcount)
/************************************************************************* Randomization of neural network weights -- ALGLIB -- Copyright 06.11.2007 by Bochkanov Sergey *************************************************************************/
public static void mlprandomize(multilayerperceptron network)
/************************************************************************* Randomization of neural network weights and standardizer -- ALGLIB -- Copyright 10.03.2008 by Bochkanov Sergey *************************************************************************/
public static void mlprandomizefull(multilayerperceptron network)
/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: Network - network XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. Works both for classifier networks and general purpose networks used as classifiers. -- ALGLIB -- Copyright 25.12.2008 by Bochkanov Sergey *************************************************************************/
public static double mlprelclserror( multilayerperceptron network, double[,] xy, int npoints)
/************************************************************************* RMS error on the test set INPUT PARAMETERS: Network - neural network XY - test set NPoints - test set size RESULT: root mean square error. Its meaning for regression task is obvious. As for classification task, RMS error means error when estimating posterior probabilities. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
public static double mlprmserror( multilayerperceptron network, double[,] xy, int npoints)
/************************************************************************* This function serializes data structure to string. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words" nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize in C# one, and vice versa. *************************************************************************/
public static void mlpserialize(multilayerperceptron obj, out string s_out)
/************************************************************************* This function sets offset/scaling coefficients for I-th input of the network. INPUT PARAMETERS: Network - network I - input index Mean - mean term Sigma - sigma term (if zero, will be replaced by 1.0) NOTE: I-th input is passed through linear transformation IN[i] = (IN[i]-Mean)/Sigma before feeding to the network. This function sets Mean and Sigma. -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
public static void mlpsetinputscaling( multilayerperceptron network, int i, double mean, double sigma)
/************************************************************************* This function modifies information about the I-th neuron of the K-th layer INPUT PARAMETERS: Network - network K - layer index I - neuron index (within layer) FKind - activation function type (used by MLPActivationFunction()) this value must be zero for input neurons (you cannot set an activation function for input neurons) Threshold - also called offset or bias; this value must be zero for input neurons (you cannot set a threshold for input neurons) NOTES: 1. this function throws an exception if a layer or neuron with the given index does not exist. 2. this function also throws an exception when you try to set a non-linear activation function for input neurons (any kind of network) or for output neurons of a classifier network. 3. this function throws an exception when you try to set a non-zero threshold for input neurons (any kind of network). -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
public static void mlpsetneuroninfo( multilayerperceptron network, int k, int i, int fkind, double threshold)
/************************************************************************* This function sets offset/scaling coefficients for I-th output of the network. INPUT PARAMETERS: Network - network I - output index Mean - mean term Sigma - sigma term (if zero, will be replaced by 1.0) NOTE: I-th output is passed through linear transformation OUT[i] = OUT[i]*Sigma+Mean before returning it to user. This function sets Sigma/Mean. In case we have SOFTMAX-normalized network, you cannot set (Mean,Sigma) to anything other than (0.0,1.0) - this function will throw an exception. -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
public static void mlpsetoutputscaling( multilayerperceptron network, int i, double mean, double sigma)
/************************************************************************* This function modifies information about connection from I0-th neuron of K0-th layer to I1-th neuron of K1-th layer. INPUT PARAMETERS: Network - network K0 - layer index I0 - neuron index (within layer) K1 - layer index I1 - neuron index (within layer) W - connection weight (must be zero for non-existent connections) This function: 1. throws an exception if a layer or neuron with the given index does not exist. 2. throws an exception if you try to set a non-zero weight for a non-existent connection -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
public static void mlpsetweight( multilayerperceptron network, int k0, int i0, int k1, int i1, double w)
/************************************************************************* This function unserializes data structure from string. *************************************************************************/
public static void mlpunserialize(string s_in, out multilayerperceptron obj)
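A hedged round-trip sketch: serialize a freshly created network into a portable string and restore it; the network geometry is arbitrary.

    alglib.multilayerperceptron net;
    alglib.mlpcreate0(3, 2, out net);          // any network will do for the round trip
    string s;
    alglib.mlpserialize(net, out s);           // network -> portable string representation
    alglib.multilayerperceptron net2;
    alglib.mlpunserialize(s, out net2);        // string -> equivalent network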
mlpensemble
mlpeavgce
mlpeavgerror
mlpeavgrelerror
mlpebagginglbfgs
mlpebagginglm
mlpecreate0
mlpecreate1
mlpecreate2
mlpecreateb0
mlpecreateb1
mlpecreateb2
mlpecreatec0
mlpecreatec1
mlpecreatec2
mlpecreatefromnetwork
mlpecreater0
mlpecreater1
mlpecreater2
mlpeissoftmax
mlpeprocess
mlpeprocessi
mlpeproperties
mlperandomize
mlperelclserror
mlpermserror
mlpetraines
/************************************************************************* Neural networks ensemble *************************************************************************/
public class mlpensemble { }
/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*LN(2)). Zero if ensemble solves regression task. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static double mlpeavgce( mlpensemble ensemble, double[,] xy, int npoints)
/************************************************************************* Average error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task it means average error when estimating posterior probabilities. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static double mlpeavgerror( mlpensemble ensemble, double[,] xy, int npoints)
/************************************************************************* Average relative error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task it means average relative error when estimating posterior probabilities. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static double mlpeavgrelerror( mlpensemble ensemble, double[,] xy, int npoints)
/************************************************************************* Training neural networks ensemble using bootstrap aggregating (bagging). L-BFGS algorithm is used as base training method. INPUT PARAMETERS: Ensemble - model with initialized geometry XY - training set NPoints - training set size Decay - weight decay coefficient, >=0.001 Restarts - restarts, >0. WStep - stopping criterion, same as in MLPTrainLBFGS MaxIts - stopping criterion, same as in MLPTrainLBFGS OUTPUT PARAMETERS: Ensemble - trained model Info - return code: * -8, if both WStep=0 and MaxIts=0 * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report. OOBErrors - out-of-bag generalization error estimate -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpebagginglbfgs( mlpensemble ensemble, double[,] xy, int npoints, double decay, int restarts, double wstep, int maxits, out int info, out mlpreport rep, out mlpcvreport ooberrors)
/************************************************************************* Training neural networks ensemble using bootstrap aggregating (bagging). Modified Levenberg-Marquardt algorithm is used as base training method. INPUT PARAMETERS: Ensemble - model with initialized geometry XY - training set NPoints - training set size Decay - weight decay coefficient, >=0.001 Restarts - restarts, >0. OUTPUT PARAMETERS: Ensemble - trained model Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report. OOBErrors - out-of-bag generalization error estimate -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpebagginglm( mlpensemble ensemble, double[,] xy, int npoints, double decay, int restarts, out int info, out mlpreport rep, out mlpcvreport ooberrors)
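A hedged sketch of bagging with the LM-based trainer; the tiny dataset is made up for illustration, and mlpreport/mlpcvreport are the report types named in the signature above.

    // 1 input, 1 output regression problem; each row of XY is [input, desired output]
    double[,] xy = new double[,]{{0,0},{1,1},{2,4},{3,9}};
    alglib.mlpensemble ensemble;
    alglib.mlpecreate1(1, 3, 1, 10, out ensemble);   // ensemble of 10 networks, one hidden layer of 3 neurons
    int info;
    alglib.mlpreport rep;
    alglib.mlpcvreport ooberrors;
    alglib.mlpebagginglm(ensemble, xy, 4, 0.001, 2, out info, out rep, out ooberrors);
    // info=2 on success; ooberrors holds the out-of-bag generalization error estimate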
/************************************************************************* Like MLPCreate0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreate0( int nin, int nout, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Like MLPCreate1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreate1( int nin, int nhid, int nout, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Like MLPCreate2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreate2( int nin, int nhid1, int nhid2, int nout, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Like MLPCreateB0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreateb0( int nin, int nout, double b, double d, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Like MLPCreateB1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreateb1( int nin, int nhid, int nout, double b, double d, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Like MLPCreateB2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreateb2( int nin, int nhid1, int nhid2, int nout, double b, double d, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Like MLPCreateC0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreatec0( int nin, int nout, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Like MLPCreateC1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreatec1( int nin, int nhid, int nout, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Like MLPCreateC2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreatec2( int nin, int nhid1, int nhid2, int nout, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Creates ensemble from network. Only network geometry is copied. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreatefromnetwork( multilayerperceptron network, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Like MLPCreateR0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreater0( int nin, int nout, double a, double b, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Like MLPCreateR1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreater1( int nin, int nhid, int nout, double a, double b, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Like MLPCreateR2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpecreater2( int nin, int nhid1, int nhid2, int nout, double a, double b, int ensemblesize, out mlpensemble ensemble)
/************************************************************************* Return normalization type (whether ensemble is SOFTMAX-normalized or not). -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static bool mlpeissoftmax(mlpensemble ensemble)
/************************************************************************* Processing INPUT PARAMETERS: Ensemble- neural networks ensemble X - input vector, array[0..NIn-1]. Y - (possibly) preallocated buffer; if size of Y is less than NOut, it will be reallocated. If it is large enough, it is NOT reallocated, so we can save some time on reallocation. OUTPUT PARAMETERS: Y - result. Regression estimate when solving regression task, vector of posterior probabilities for classification task. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpeprocess( mlpensemble ensemble, double[] x, ref double[] y)
/************************************************************************* 'interactive' variant of MLPEProcess for languages like Python which support constructs like "Y = MLPEProcess(LM,X)" and interactive mode of the interpreter This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpeprocessi( mlpensemble ensemble, double[] x, out double[] y)
/************************************************************************* Return ensemble properties (number of inputs and outputs). -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpeproperties( mlpensemble ensemble, out int nin, out int nout)
/************************************************************************* Randomization of MLP ensemble -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static void mlperandomize(mlpensemble ensemble)
/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. Works both for classifier networks and for regression networks which are used as classifiers. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static double mlperelclserror( mlpensemble ensemble, double[,] xy, int npoints)
/************************************************************************* RMS error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: root mean square error. Its meaning for regression task is obvious. As for classification task RMS error means error when estimating posterior probabilities. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
public static double mlpermserror( mlpensemble ensemble, double[,] xy, int npoints)
/************************************************************************* Training neural networks ensemble using early stopping. INPUT PARAMETERS: Ensemble - model with initialized geometry XY - training set NPoints - training set size Decay - weight decay coefficient, >=0.001 Restarts - restarts, >0. OUTPUT PARAMETERS: Ensemble - trained model Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<0, Restarts<1). * 6, if task has been solved. Rep - training report. -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
public static void mlpetraines( mlpensemble ensemble, double[,] xy, int npoints, double decay, int restarts, out int info, out mlpreport rep)
mlpcvreport
mlpreport
mlpkfoldcvlbfgs
mlpkfoldcvlm
mlptraines
mlptrainlbfgs
mlptrainlm
/************************************************************************* Cross-validation estimates of generalization error *************************************************************************/
public class mlpcvreport { public double relclserror; public double avgce; public double rmserror; public double avgerror; public double avgrelerror; }
/************************************************************************* Training report: * NGrad - number of gradient calculations * NHess - number of Hessian calculations * NCholesky - number of Cholesky decompositions *************************************************************************/
public class mlpreport { public int ngrad; public int nhess; public int ncholesky; }
/************************************************************************* Cross-validation estimate of generalization error. Base algorithm - L-BFGS. INPUT PARAMETERS: Network - neural network with initialized geometry. Network is not changed during cross-validation - it is used only as a representative of its architecture. XY - training set. SSize - training set size Decay - weight decay, same as in MLPTrainLBFGS Restarts - number of restarts, >0. restarts are counted for each partition separately, so total number of restarts will be Restarts*FoldsCount. WStep - stopping criterion, same as in MLPTrainLBFGS MaxIts - stopping criterion, same as in MLPTrainLBFGS FoldsCount - number of folds in k-fold cross-validation, 2<=FoldsCount<=SSize. recommended value: 10. OUTPUT PARAMETERS: Info - return code, same as in MLPTrainLBFGS Rep - report, same as in MLPTrainLM/MLPTrainLBFGS CVRep - generalization error estimates -- ALGLIB -- Copyright 09.12.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpkfoldcvlbfgs( multilayerperceptron network, double[,] xy, int npoints, double decay, int restarts, double wstep, int maxits, int foldscount, out int info, out mlpreport rep, out mlpcvreport cvrep)
/************************************************************************* Cross-validation estimate of generalization error. Base algorithm - Levenberg-Marquardt. INPUT PARAMETERS: Network - neural network with initialized geometry. Network is not changed during cross-validation - it is used only as a representative of its architecture. XY - training set. SSize - training set size Decay - weight decay, same as in MLPTrainLBFGS Restarts - number of restarts, >0. restarts are counted for each partition separately, so total number of restarts will be Restarts*FoldsCount. FoldsCount - number of folds in k-fold cross-validation, 2<=FoldsCount<=SSize. recommended value: 10. OUTPUT PARAMETERS: Info - return code, same as in MLPTrainLBFGS Rep - report, same as in MLPTrainLM/MLPTrainLBFGS CVRep - generalization error estimates -- ALGLIB -- Copyright 09.12.2007 by Bochkanov Sergey *************************************************************************/
public static void mlpkfoldcvlm( multilayerperceptron network, double[,] xy, int npoints, double decay, int restarts, int foldscount, out int info, out mlpreport rep, out mlpcvreport cvrep)
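A hypothetical cross-validation sketch (not an official example; the data below are synthetic, and MLPCreate1 is the network constructor from the mlpbase subpackage documented elsewhere in this manual):

public static int Main(string[] args)
{
    // synthetic regression set, columns are [x, y] with y = 2*x
    double[,] xy = new double[,]{
        {0.0,0.0},{0.1,0.2},{0.2,0.4},{0.3,0.6},{0.4,0.8},
        {0.5,1.0},{0.6,1.2},{0.7,1.4},{0.8,1.6},{0.9,1.8}};
    int info;
    alglib.multilayerperceptron network;
    alglib.mlpreport rep;
    alglib.mlpcvreport cvrep;

    // 1-3-1 network used only as a representative of its architecture
    alglib.mlpcreate1(1, 3, 1, out network);
    // 5-fold CV with LM training: Decay=0.001, Restarts=2
    alglib.mlpkfoldcvlm(network, xy, 10, 0.001, 2, 5, out info, out rep, out cvrep);
    System.Console.WriteLine("{0}", cvrep.rmserror); // cross-validated RMS error
    return 0;
}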
/************************************************************************* Neural network training using early stopping (base algorithm - L-BFGS with regularization). INPUT PARAMETERS: Network - neural network with initialized geometry TrnXY - training set TrnSize - training set size ValXY - validation set ValSize - validation set size Decay - weight decay constant, >=0.001 Decay term 'Decay*||Weights||^2' is added to error function. If you don't know what Decay to choose, use 0.001. Restarts - number of restarts from random position, >0. If you don't know what Restarts to choose, use 2. OUTPUT PARAMETERS: Network - trained neural network. Info - return code: * -2, if there is a point with class number outside of [0..NOut-1]. * -1, if wrong parameters specified (NPoints<0, Restarts<1, ...). * 2, task has been solved, stopping criterion met - sufficiently small step size. Not expected (we use EARLY stopping) but possible and not an error. * 6, task has been solved, stopping criterion met - increase of validation set error. Rep - training report NOTE: Algorithm stops if validation set error increases for long enough or step size is small enough (there are tasks where validation set error may decrease for eternity). In any case the solution returned corresponds to the minimum of validation set error. -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
public static void mlptraines( multilayerperceptron network, double[,] trnxy, int trnsize, double[,] valxy, int valsize, double decay, int restarts, out int info, out mlpreport rep)
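A hypothetical early-stopping sketch (not an official example; the training/validation split below is invented, and MLPCreate1 belongs to the mlpbase subpackage documented elsewhere in this manual):

public static int Main(string[] args)
{
    // invented split into training and validation parts, columns are [x0, x1, y]
    double[,] trnxy = new double[,]{{0,0,0},{0,1,1},{1,0,1},{1,1,0}};
    double[,] valxy = new double[,]{{0.1,0.9,1},{0.9,0.1,1}};
    int info;
    alglib.multilayerperceptron network;
    alglib.mlpreport rep;

    alglib.mlpcreate1(2, 5, 1, out network);
    alglib.mlptraines(network, trnxy, 4, valxy, 2, 0.001, 2, out info, out rep);
    System.Console.WriteLine("{0}", info); // 2 or 6 on success
    return 0;
}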
/************************************************************************* Neural network training using L-BFGS algorithm with regularization. Subroutine trains neural network with restarts from random positions. Algorithm is well suited for problems of any dimensionality (memory requirements and step complexity are linear in the number of weights). INPUT PARAMETERS: Network - neural network with initialized geometry XY - training set NPoints - training set size Decay - weight decay constant, >=0.001 Decay term 'Decay*||Weights||^2' is added to error function. If you don't know what Decay to choose, use 0.001. Restarts - number of restarts from random position, >0. If you don't know what Restarts to choose, use 2. WStep - stopping criterion. Algorithm stops if step size is less than WStep. Recommended value - 0.01. Zero step size means stopping after MaxIts iterations. MaxIts - stopping criterion. Algorithm stops after MaxIts iterations (NOT gradient calculations). Zero MaxIts means stopping when step is sufficiently small. OUTPUT PARAMETERS: Network - trained neural network. Info - return code: * -8, if both WStep=0 and MaxIts=0 * -2, if there is a point with class number outside of [0..NOut-1]. * -1, if wrong parameters specified (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report -- ALGLIB -- Copyright 09.12.2007 by Bochkanov Sergey *************************************************************************/
public static void mlptrainlbfgs( multilayerperceptron network, double[,] xy, int npoints, double decay, int restarts, double wstep, int maxits, out int info, out mlpreport rep)
/************************************************************************* Neural network training using modified Levenberg-Marquardt with exact Hessian calculation and regularization. Subroutine trains neural network with restarts from random positions. Algorithm is well suited for small and medium scale problems (hundreds of weights). INPUT PARAMETERS: Network - neural network with initialized geometry XY - training set NPoints - training set size Decay - weight decay constant, >=0.001 Decay term 'Decay*||Weights||^2' is added to error function. If you don't know what Decay to choose, use 0.001. Restarts - number of restarts from random position, >0. If you don't know what Restarts to choose, use 2. OUTPUT PARAMETERS: Network - trained neural network. Info - return code: * -9, if internal matrix inverse subroutine failed * -2, if there is a point with class number outside of [0..NOut-1]. * -1, if wrong parameters specified (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
public static void mlptrainlm( multilayerperceptron network, double[,] xy, int npoints, double decay, int restarts, out int info, out mlpreport rep)
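A minimal training sketch for MLPTrainLM (not an official example; the data are invented, and MLPCreate1/MLPProcess come from the mlpbase subpackage documented elsewhere in this manual):

public static int Main(string[] args)
{
    // tiny regression problem: learn y = 2*x on [0,1]
    double[,] xy = new double[,]{{0.0,0.0},{0.25,0.5},{0.5,1.0},{0.75,1.5},{1.0,2.0}};
    int info;
    alglib.multilayerperceptron network;
    alglib.mlpreport rep;
    double[] y = new double[0];

    alglib.mlpcreate1(1, 3, 1, out network);
    alglib.mlptrainlm(network, xy, 5, 0.001, 3, out info, out rep);
    alglib.mlpprocess(network, new double[]{0.5}, ref y);
    System.Console.WriteLine("{0}", alglib.ap.format(y,2)); // should be roughly [1.00]
    return 0;
}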
kdtree
kdtreebuild
kdtreebuildtagged
kdtreequeryaknn
kdtreequeryknn
kdtreequeryresultsdistances
kdtreequeryresultsdistancesi
kdtreequeryresultstags
kdtreequeryresultstagsi
kdtreequeryresultsx
kdtreequeryresultsxi
kdtreequeryresultsxy
kdtreequeryresultsxyi
kdtreequeryrnn
kdtreeserialize
kdtreeunserialize
nneighbor_d_1 Nearest neighbor search, KNN queries
nneighbor_d_2 Serialization of KD-trees
/************************************************************************* *************************************************************************/
public class kdtree { }
/************************************************************************* KD-tree creation This subroutine creates KD-tree from set of X-values and optional Y-values INPUT PARAMETERS XY - dataset, array[0..N-1,0..NX+NY-1]. one row corresponds to one point. first NX columns contain X-values, next NY (NY may be zero) columns may contain associated Y-values N - number of points, N>=1 NX - space dimension, NX>=1. NY - number of optional Y-values, NY>=0. NormType- norm type: * 0 denotes infinity-norm * 1 denotes 1-norm * 2 denotes 2-norm (Euclidean norm) OUTPUT PARAMETERS KDT - KD-tree NOTES 1. KD-tree creation has O(N*logN) complexity and O(N*(2*NX+NY)) memory requirements. 2. Although KD-trees may be used with any combination of N and NX, they are more efficient than brute-force search only when N >> 4^NX. So they are most useful in low-dimensional tasks (NX=2, NX=3). NX=1 is another inefficient case, because simple binary search (without additional structures) is much more efficient in such tasks than KD-trees. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static void kdtreebuild( double[,] xy, int nx, int ny, int normtype, out kdtree kdt) public static void kdtreebuild( double[,] xy, int n, int nx, int ny, int normtype, out kdtree kdt)

Examples:   [1]  [2]  

/************************************************************************* KD-tree creation This subroutine creates KD-tree from set of X-values, integer tags and optional Y-values INPUT PARAMETERS XY - dataset, array[0..N-1,0..NX+NY-1]. one row corresponds to one point. first NX columns contain X-values, next NY (NY may be zero) columns may contain associated Y-values Tags - tags, array[0..N-1], contains integer tags associated with points. N - number of points, N>=1 NX - space dimension, NX>=1. NY - number of optional Y-values, NY>=0. NormType- norm type: * 0 denotes infinity-norm * 1 denotes 1-norm * 2 denotes 2-norm (Euclidean norm) OUTPUT PARAMETERS KDT - KD-tree NOTES 1. KD-tree creation has O(N*logN) complexity and O(N*(2*NX+NY)) memory requirements. 2. Although KD-trees may be used with any combination of N and NX, they are more efficient than brute-force search only when N >> 4^NX. So they are most useful in low-dimensional tasks (NX=2, NX=3). NX=1 is another inefficient case, because simple binary search (without additional structures) is much more efficient in such tasks than KD-trees. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static void kdtreebuildtagged( double[,] xy, int[] tags, int nx, int ny, int normtype, out kdtree kdt) public static void kdtreebuildtagged( double[,] xy, int[] tags, int n, int nx, int ny, int normtype, out kdtree kdt)

Examples:   [1]  
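A hypothetical sketch of a tagged KD-tree query (not one of the listed examples; the points and tag values are invented):

public static int Main(string[] args)
{
    // 2D points with integer tags (e.g. record IDs)
    double[,] a = new double[,]{{0,0},{0,1},{1,0},{1,1}};
    int[] tags = new int[]{10,20,30,40};
    alglib.kdtree kdt;
    int[] restags = new int[0];

    alglib.kdtreebuildtagged(a, tags, 2, 0, 2, out kdt);
    alglib.kdtreequeryknn(kdt, new double[]{0.9,0.9}, 1);
    alglib.kdtreequeryresultstags(kdt, ref restags);
    System.Console.WriteLine("{0}", restags[0]); // nearest point is (1,1), so its tag 40 is returned
    return 0;
}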

/************************************************************************* K-NN query: approximate K nearest neighbors INPUT PARAMETERS KDT - KD-tree X - point, array[0..NX-1]. K - number of neighbors to return, K>=1 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned * if not given, considered True Eps - approximation factor, Eps>=0. eps-approximate nearest neighbor is a neighbor whose distance from X is at most (1+eps) times distance of true nearest neighbor. RESULT number of actual neighbors found (either K or N, if K>N). NOTES significant performance gain may be achieved only when Eps is on the order of magnitude of 1 or larger. This subroutine performs query and stores its result in the internal structures of the KD-tree. You can use following subroutines to obtain these results: * KDTreeQueryResultsX() to get X-values * KDTreeQueryResultsXY() to get X- and Y-values * KDTreeQueryResultsTags() to get tag values * KDTreeQueryResultsDistances() to get distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static int kdtreequeryaknn( kdtree kdt, double[] x, int k, double eps) public static int kdtreequeryaknn( kdtree kdt, double[] x, int k, bool selfmatch, double eps)

Examples:   [1]  

/************************************************************************* K-NN query: K nearest neighbors INPUT PARAMETERS KDT - KD-tree X - point, array[0..NX-1]. K - number of neighbors to return, K>=1 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned * if not given, considered True RESULT number of actual neighbors found (either K or N, if K>N). This subroutine performs query and stores its result in the internal structures of the KD-tree. You can use following subroutines to obtain these results: * KDTreeQueryResultsX() to get X-values * KDTreeQueryResultsXY() to get X- and Y-values * KDTreeQueryResultsTags() to get tag values * KDTreeQueryResultsDistances() to get distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static int kdtreequeryknn(kdtree kdt, double[] x, int k) public static int kdtreequeryknn( kdtree kdt, double[] x, int k, bool selfmatch)

Examples:   [1]  

/************************************************************************* Distances from last query INPUT PARAMETERS KDT - KD-tree R - possibly pre-allocated buffer. If R is too small to store result, it is resized. If size(R) is enough to store result, it is left unchanged. OUTPUT PARAMETERS R - filled with distances (in corresponding norm) NOTES 1. points are ordered by distance from the query point (first = closest) 2. if R is larger than required to store result, only leading part will be overwritten; trailing part will be left unchanged. So if on input R = [A,B,C,D], and result is [1,2], then on exit we will get R = [1,2,C,D]. This is done purposely to increase performance; if you want function to resize array according to result size, use function with same name and suffix 'I'. SEE ALSO * KDTreeQueryResultsX() X-values * KDTreeQueryResultsXY() X- and Y-values * KDTreeQueryResultsTags() tag values -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static void kdtreequeryresultsdistances(kdtree kdt, ref double[] r)

Examples:   [1]  

/************************************************************************* Distances from last query; 'interactive' variant for languages like Python which support constructs like "R = KDTreeQueryResultsDistancesI(KDT)" and interactive mode of interpreter. This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static void kdtreequeryresultsdistancesi( kdtree kdt, out double[] r)
/************************************************************************* Tags from last query INPUT PARAMETERS KDT - KD-tree Tags - possibly pre-allocated buffer. If Tags is too small to store result, it is resized. If size(Tags) is enough to store result, it is left unchanged. OUTPUT PARAMETERS Tags - filled with tags associated with points, or, when no tags were supplied, with zeros NOTES 1. points are ordered by distance from the query point (first = closest) 2. if Tags is larger than required to store result, only leading part will be overwritten; trailing part will be left unchanged. So if on input Tags = [A,B,C,D], and result is [1,2], then on exit we will get Tags = [1,2,C,D]. This is done purposely to increase performance; if you want function to resize array according to result size, use function with same name and suffix 'I'. SEE ALSO * KDTreeQueryResultsX() X-values * KDTreeQueryResultsXY() X- and Y-values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static void kdtreequeryresultstags(kdtree kdt, ref int[] tags)

Examples:   [1]  

/************************************************************************* Tags from last query; 'interactive' variant for languages like Python which support constructs like "Tags = KDTreeQueryResultsTagsI(KDT)" and interactive mode of interpreter. This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static void kdtreequeryresultstagsi(kdtree kdt, out int[] tags)
/************************************************************************* X-values from last query INPUT PARAMETERS KDT - KD-tree X - possibly pre-allocated buffer. If X is too small to store result, it is resized. If size(X) is enough to store result, it is left unchanged. OUTPUT PARAMETERS X - rows are filled with X-values NOTES 1. points are ordered by distance from the query point (first = closest) 2. if X is larger than required to store result, only leading part will be overwritten; trailing part will be left unchanged. So if on input X = [[A,B],[C,D]], and result is [1,2], then on exit we will get X = [[1,2],[C,D]]. This is done purposely to increase performance; if you want function to resize array according to result size, use function with same name and suffix 'I'. SEE ALSO * KDTreeQueryResultsXY() X- and Y-values * KDTreeQueryResultsTags() tag values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static void kdtreequeryresultsx(kdtree kdt, ref double[,] x)

Examples:   [1]  

/************************************************************************* X-values from last query; 'interactive' variant for languages like Python which support constructs like "X = KDTreeQueryResultsXI(KDT)" and interactive mode of interpreter. This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static void kdtreequeryresultsxi(kdtree kdt, out double[,] x)
/************************************************************************* X- and Y-values from last query INPUT PARAMETERS KDT - KD-tree XY - possibly pre-allocated buffer. If XY is too small to store result, it is resized. If size(XY) is enough to store result, it is left unchanged. OUTPUT PARAMETERS XY - rows are filled with points: first NX columns with X-values, next NY columns - with Y-values. NOTES 1. points are ordered by distance from the query point (first = closest) 2. if XY is larger than required to store result, only leading part will be overwritten; trailing part will be left unchanged. So if on input XY = [[A,B],[C,D]], and result is [1,2], then on exit we will get XY = [[1,2],[C,D]]. This is done purposely to increase performance; if you want function to resize array according to result size, use function with same name and suffix 'I'. SEE ALSO * KDTreeQueryResultsX() X-values * KDTreeQueryResultsTags() tag values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static void kdtreequeryresultsxy(kdtree kdt, ref double[,] xy)

Examples:   [1]  

/************************************************************************* XY-values from last query; 'interactive' variant for languages like Python which support constructs like "XY = KDTreeQueryResultsXYI(KDT)" and interactive mode of interpreter. This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static void kdtreequeryresultsxyi(kdtree kdt, out double[,] xy)
/************************************************************************* R-NN query: all points within R-sphere centered at X INPUT PARAMETERS KDT - KD-tree X - point, array[0..NX-1]. R - radius of sphere (in corresponding norm), R>0 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned * if not given, considered True RESULT number of neighbors found, >=0 This subroutine performs query and stores its result in the internal structures of the KD-tree. You can use following subroutines to obtain actual results: * KDTreeQueryResultsX() to get X-values * KDTreeQueryResultsXY() to get X- and Y-values * KDTreeQueryResultsTags() to get tag values * KDTreeQueryResultsDistances() to get distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
public static int kdtreequeryrnn(kdtree kdt, double[] x, double r) public static int kdtreequeryrnn( kdtree kdt, double[] x, double r, bool selfmatch)

Examples:   [1]  
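A hypothetical R-NN sketch (not one of the listed examples; the points and radius are invented):

public static int Main(string[] args)
{
    double[,] a = new double[,]{{0,0},{0,1},{1,0},{1,1}};
    alglib.kdtree kdt;
    double[,] r = new double[0,0];
    int k;

    alglib.kdtreebuild(a, 2, 0, 2, out kdt);
    // all points within Euclidean distance 1.1 of the origin
    k = alglib.kdtreequeryrnn(kdt, new double[]{0,0}, 1.1);
    alglib.kdtreequeryresultsx(kdt, ref r);
    System.Console.WriteLine("{0}", k); // 3 points qualify: (0,0), (0,1) and (1,0)
    return 0;
}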

/************************************************************************* This function serializes data structure to string. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words", nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize in C# one, and vice versa. *************************************************************************/
public static void kdtreeserialize(kdtree obj, out string s_out)
/************************************************************************* This function unserializes data structure from string. *************************************************************************/
public static void kdtreeunserialize(string s_in, out kdtree obj)

public static int Main(string[] args)
{
    double[,] a = new double[,]{{0,0},{0,1},{1,0},{1,1}};
    int nx = 2;
    int ny = 0;
    int normtype = 2;
    alglib.kdtree kdt;
    double[] x;
    double[,] r = new double[0,0];
    int k;
    alglib.kdtreebuild(a, nx, ny, normtype, out kdt);
    x = new double[]{-1,0};
    k = alglib.kdtreequeryknn(kdt, x, 1);
    System.Console.WriteLine("{0}", k); // EXPECTED: 1
    alglib.kdtreequeryresultsx(kdt, ref r);
    System.Console.WriteLine("{0}", alglib.ap.format(r,1)); // EXPECTED: [[0,0]]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    double[,] a = new double[,]{{0,0},{0,1},{1,0},{1,1}};
    int nx = 2;
    int ny = 0;
    int normtype = 2;
    alglib.kdtree kdt0;
    alglib.kdtree kdt1;
    string s;
    double[] x;
    double[,] r0 = new double[0,0];
    double[,] r1 = new double[0,0];

    //
    // Build tree and serialize it
    //
    alglib.kdtreebuild(a, nx, ny, normtype, out kdt0);
    alglib.kdtreeserialize(kdt0, out s);
    alglib.kdtreeunserialize(s, out kdt1);

    //
    // Compare results from KNN queries
    //
    x = new double[]{-1,0};
    alglib.kdtreequeryknn(kdt0, x, 1);
    alglib.kdtreequeryresultsx(kdt0, ref r0);
    alglib.kdtreequeryknn(kdt1, x, 1);
    alglib.kdtreequeryresultsx(kdt1, ref r1);
    System.Console.WriteLine("{0}", alglib.ap.format(r0,1)); // EXPECTED: [[0,0]]
    System.Console.WriteLine("{0}", alglib.ap.format(r1,1)); // EXPECTED: [[0,0]]
    System.Console.ReadLine();
    return 0;
}


nleqreport
nleqstate
nleqcreatelm
nleqrestartfrom
nleqresults
nleqresultsbuf
nleqsetcond
nleqsetstpmax
nleqsetxrep
nleqsolve
/************************************************************************* *************************************************************************/
public class nleqreport { public int iterationscount; public int nfunc; public int njac; public int terminationtype; }
/************************************************************************* *************************************************************************/
public class nleqstate { }
/************************************************************************* LEVENBERG-MARQUARDT-LIKE NONLINEAR SOLVER DESCRIPTION: This algorithm solves system of nonlinear equations F[0](x[0], ..., x[n-1]) = 0 F[1](x[0], ..., x[n-1]) = 0 ... F[M-1](x[0], ..., x[n-1]) = 0 where M and N do not necessarily coincide. Algorithm converges quadratically under the following conditions: * the solution set XS is nonempty * for some xs in XS there exists a neighbourhood N(xs) such that: * vector function F(x) and its Jacobian J(x) are continuously differentiable on N * ||F(x)|| provides local error bound on N, i.e. there exists c1 such that ||F(x)||>c1*distance(x,XS) Note that these conditions are much weaker than the usual non-singularity conditions. For example, algorithm will converge for any affine function F (whether its Jacobian is singular or not). REQUIREMENTS: Algorithm will request the following information during its operation: * function vector F[] and Jacobian matrix at given point X * value of merit function f(x)=F[0]^2(x)+...+F[M-1]^2(x) at given point X USAGE: 1. User initializes algorithm state with NLEQCreateLM() call 2. User tunes solver parameters with NLEQSetCond(), NLEQSetStpMax() and other functions 3. User calls NLEQSolve() function which takes algorithm state and pointers (delegates, etc.) to callback functions which calculate merit function value and Jacobian. 4. User calls NLEQResults() to get solution 5. Optionally, user may call NLEQRestartFrom() to solve another problem with same parameters (N/M) but another starting point and/or another function vector. NLEQRestartFrom() allows reusing an already initialized structure. INPUT PARAMETERS: N - space dimension, N>1: * if provided, only leading N elements of X are used * if not provided, determined automatically from size of X M - system size X - starting point OUTPUT PARAMETERS: State - structure which stores algorithm state NOTES: 1. you may tune stopping conditions with NLEQSetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which lead to overflow, use NLEQSetStpMax() function to bound algorithm's steps. 3. this algorithm is a slightly modified implementation of the method described in 'Levenberg-Marquardt method for constrained nonlinear equations with strong local convergence properties' by Christian Kanzow, Nobuo Yamashita and Masao Fukushima and further developed in 'On the convergence of a New Levenberg-Marquardt Method' by Jin-yan Fan and Ya-Xiang Yuan. -- ALGLIB -- Copyright 20.08.2009 by Bochkanov Sergey *************************************************************************/
public static void nleqcreatelm(int m, double[] x, out nleqstate state) public static void nleqcreatelm( int n, int m, double[] x, out nleqstate state)
/************************************************************************* This subroutine restarts the nonlinear solver from a new point. All solver parameters are left unchanged. This function allows solving multiple problems (which must have the same number of dimensions) without object reallocation penalty. INPUT PARAMETERS: State - structure used for reverse communication previously allocated with NLEQCreateLM call. X - new starting point. -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
public static void nleqrestartfrom(nleqstate state, double[] x)
/************************************************************************* NLEQ solver results INPUT PARAMETERS: State - algorithm state. OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report: * Rep.TerminationType completion code: * -4 ERROR: algorithm has converged to the stationary point Xf which is a local minimum of f=F[0]^2+...+F[m-1]^2, but is not a solution of the nonlinear system. * 1 sqrt(f)<=EpsF. * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible * Rep.IterationsCount contains iterations count * Rep.NFunc contains number of function calculations * Rep.NJac contains number of Jacobian calculations -- ALGLIB -- Copyright 20.08.2009 by Bochkanov Sergey *************************************************************************/
public static void nleqresults( nleqstate state, out double[] x, out nleqreport rep)
/************************************************************************* NLEQ solver results Buffered implementation of NLEQResults(), which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 20.08.2009 by Bochkanov Sergey *************************************************************************/
public static void nleqresultsbuf( nleqstate state, ref double[] x, nleqreport rep)
/************************************************************************* This function sets stopping conditions for the nonlinear solver INPUT PARAMETERS: State - structure which stores algorithm state EpsF - >=0 The subroutine finishes its work if on k+1-th iteration the condition ||F||<=EpsF is satisfied MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsF=0 and MaxIts=0 simultaneously will lead to automatic stopping criterion selection (small EpsF). -- ALGLIB -- Copyright 20.08.2010 by Bochkanov Sergey *************************************************************************/
public static void nleqsetcond(nleqstate state, double epsf, int maxits)
/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when target function contains exp() or other fast growing functions, and algorithm makes too large steps which lead to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the x+stp*d. -- ALGLIB -- Copyright 20.08.2010 by Bochkanov Sergey *************************************************************************/
public static void nleqsetstpmax(nleqstate state, double stpmax)
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, algorithm will call rep() callback function if it is provided to NLEQSolve(). -- ALGLIB -- Copyright 20.08.2010 by Bochkanov Sergey *************************************************************************/
public static void nleqsetxrep(nleqstate state, bool needxrep)
/************************************************************************* This family of functions is used to launch iterations of the nonlinear solver. These functions accept the following parameters: state - algorithm state func - callback which calculates function (or merit function) value func at given point x jac - callback which calculates function vector fi[] and Jacobian jac at given point x rep - optional callback which is called after each iteration, can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep, can be NULL -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/
public static void nleqsolve(nleqstate state, ndimensional_func func, ndimensional_jac jac, ndimensional_rep rep, object obj)
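A hypothetical end-to-end sketch of the USAGE sequence described above (not an official example; the 2x2 system is invented, and the callbacks assume the standard ndimensional_func/ndimensional_jac delegate signatures used by the ALGLIB wrappers):

public static void nleq_func(double[] x, ref double func, object obj)
{
    // merit function f(x) = F0^2 + F1^2 for the system
    // F0 = x0^2 + x1^2 - 1, F1 = x0 - x1
    double f0 = x[0]*x[0] + x[1]*x[1] - 1;
    double f1 = x[0] - x[1];
    func = f0*f0 + f1*f1;
}
public static void nleq_jac(double[] x, double[] fi, double[,] jac, object obj)
{
    // function vector and its Jacobian
    fi[0] = x[0]*x[0] + x[1]*x[1] - 1;
    fi[1] = x[0] - x[1];
    jac[0,0] = 2*x[0]; jac[0,1] = 2*x[1];
    jac[1,0] = 1;      jac[1,1] = -1;
}
public static int Main(string[] args)
{
    double[] x = new double[]{1.0, 0.5}; // starting point
    alglib.nleqstate state;
    alglib.nleqreport rep;
    alglib.nleqcreatelm(2, x, out state); // M=2 equations, N taken from size of X
    alglib.nleqsetcond(state, 0.0, 0);    // automatic stopping criterion
    alglib.nleqsolve(state, nleq_func, nleq_jac, null, null);
    alglib.nleqresults(state, out x, out rep);
    System.Console.WriteLine("{0}", alglib.ap.format(x,3)); // should be close to [0.707, 0.707]
    return 0;
}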
errorfunction
errorfunctionc
inverf
invnormaldistribution
normaldistribution
/************************************************************************* Error function The integral is erf(x) = 2/sqrt(pi) * integral from 0 to x of exp(-t^2) dt. For 0 <= |x| < 1, erf(x) = x * P4(x**2)/Q5(x**2); otherwise erf(x) = 1 - erfc(x). ACCURACY: Relative error: IEEE arithmetic, domain [0,1], 30000 trials, peak 3.7e-16, rms 1.0e-16. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier *************************************************************************/
public static double errorfunction(double x)
/************************************************************************* Complementary error function 1 - erf(x) = erfc(x) = 2/sqrt(pi) * integral from x to +infinity of exp(-t^2) dt. For small x, erfc(x) = 1 - erf(x); otherwise rational approximations are computed. ACCURACY: Relative error: IEEE arithmetic, domain [0,26.6417], 30000 trials, peak 5.7e-14, rms 1.5e-14. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier *************************************************************************/
public static double errorfunctionc(double x)
/************************************************************************* Inverse of the error function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier *************************************************************************/
public static double inverf(double e)
/************************************************************************* Inverse of Normal distribution function Returns the argument, x, for which the area under the Gaussian probability density function (integrated from minus infinity to x) is equal to y. For small arguments 0 < y < exp(-2), the program computes z = sqrt( -2.0 * log(y) ); then the approximation is x = z - log(z)/z - (1/z) P(1/z) / Q(1/z). There are two rational functions P/Q, one for 0 < y < exp(-32) and the other for y up to exp(-2). For larger arguments, w = y - 0.5, and x/sqrt(2pi) = w + w**3 R(w**2)/S(w**2). ACCURACY: Relative error: IEEE arithmetic, domain [0.125,1], 20000 trials, peak 7.2e-16, rms 1.3e-16; domain [3e-308,0.135], 50000 trials, peak 4.6e-16, rms 9.8e-17. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier *************************************************************************/
public static double invnormaldistribution(double y0)
/************************************************************************* Normal distribution function Returns the area under the Gaussian probability density function, integrated from minus infinity to x: ndtr(x) = 1/sqrt(2pi) * integral from -infinity to x of exp(-t^2/2) dt = ( 1 + erf(z) ) / 2 = erfc(z) / 2, where z = x/sqrt(2). Computation is via the functions erf and erfc. ACCURACY: Relative error: IEEE arithmetic, domain [-13,0], 30000 trials, peak 3.4e-14, rms 6.7e-15. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier *************************************************************************/
public static double normaldistribution(double x)
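A small self-check of the identity stated above, ndtr(x) = (1+erf(x/sqrt(2)))/2 (hypothetical snippet, not an official example):

public static int Main(string[] args)
{
    double x = 0.5;
    double viaerf = 0.5*(1 + alglib.errorfunction(x/System.Math.Sqrt(2)));
    // both values should be approximately 0.6915
    System.Console.WriteLine("{0} {1}", alglib.normaldistribution(x), viaerf);
    return 0;
}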
odesolverreport
odesolverstate
odesolverresults
odesolverrkck
odesolversolve
odesolver_d1 Solving y'=-y with ODE solver
/************************************************************************* *************************************************************************/
public class odesolverreport { public int nfev; public int terminationtype; }
/************************************************************************* *************************************************************************/
public class odesolverstate { }
/************************************************************************* ODE solver results Called after OdeSolverIteration returned False. INPUT PARAMETERS: State - algorithm state (used by OdeSolverIteration). OUTPUT PARAMETERS: M - number of tabulated values, M>=1 XTbl - array[0..M-1], values of X YTbl - array[0..M-1,0..N-1], values of Y in X[i] Rep - solver report: * Rep.TerminationType completion code: * -2 X is not ordered by ascending/descending or there are non-distinct X[], i.e. X[i]=X[i+1] * -1 incorrect parameters were specified * 1 task has been solved * Rep.NFEV contains number of function calculations -- ALGLIB -- Copyright 01.09.2009 by Bochkanov Sergey *************************************************************************/
public static void odesolverresults( odesolverstate state, out int m, out double[] xtbl, out double[,] ytbl, out odesolverreport rep)

Examples:   [1]  

/************************************************************************* Cash-Karp adaptive ODE solver. This subroutine solves ODE Y'=f(Y,x) with initial conditions Y(xs)=Ys (here Y may be single variable or vector of N variables). INPUT PARAMETERS: Y - initial conditions, array[0..N-1]. contains values of Y[] at X[0] N - system size X - points at which Y should be tabulated, array[0..M-1] integration starts at X[0], ends at X[M-1], intermediate values at X[i] are returned too. SHOULD BE ORDERED BY ASCENDING OR BY DESCENDING!!!! M - number of intermediate points + first point + last point: * M>2 means that you need both Y(X[M-1]) and M-2 values at intermediate points * M=2 means that you want just to integrate from X[0] to X[1] and are not interested in intermediate values. * M=1 means that you don't want to integrate :) it is degenerate case, but it will be handled correctly. * M<1 means error Eps - tolerance (absolute/relative error on each step will be less than Eps). When passing: * Eps>0, it means desired ABSOLUTE error * Eps<0, it means desired RELATIVE error. Relative errors are calculated with respect to maximum values of Y seen so far. Be careful to use this criterion when starting from Y[] that are close to zero. H - initial step length, it will be adjusted automatically after the first step. If H=0, step will be selected automatically (usually it will be equal to 0.001 of min(x[i]-x[j])). OUTPUT PARAMETERS State - structure which stores algorithm state between subsequent calls of OdeSolverIteration. Used for reverse communication. This structure should be passed to the OdeSolverIteration subroutine. SEE ALSO AutoGKSmoothW, AutoGKSingular, AutoGKIteration, AutoGKResults. -- ALGLIB -- Copyright 01.09.2009 by Bochkanov Sergey *************************************************************************/
public static void odesolverrkck( double[] y, double[] x, double eps, double h, out odesolverstate state) public static void odesolverrkck( double[] y, int n, double[] x, int m, double eps, double h, out odesolverstate state)

Examples:   [1]  

/************************************************************************* This function is used to launch iterations of the ODE solver. It accepts the following parameters: diff - callback which calculates dy/dx for given y and x ptr - optional pointer which is passed to diff; can be NULL -- ALGLIB -- Copyright 01.09.2009 by Bochkanov Sergey *************************************************************************/
public static void odesolversolve(odesolverstate state, ndimensional_ode_rp diff, object obj)
{
    if( diff==null )
        throw new alglibexception("ALGLIB: error in 'odesolversolve()' (diff is null)");
    while( alglib.odesolveriteration(state) )
    {
        if( state.needdy )
        {
            // ask the user-supplied callback for dy/dx at the current (y,x)
            diff(state.innerobj.y, state.innerobj.x, state.innerobj.dy, obj);
            continue;
        }
        throw new alglibexception("ALGLIB: unexpected error in 'odesolversolve'");
    }
}

Examples:   [1]  

public static void ode_function_1_diff(double[] y, double x, double[] dy, object obj)
{
    // this callback calculates f(y[],x)=-y[0]
    dy[0] = -y[0];
}
public static int Main(string[] args)
{
    double[] y = new double[]{1};
    double[] x = new double[]{0,1,2,3};
    double eps = 0.00001;
    double h = 0;
    alglib.odesolverstate s;
    int m;
    double[] xtbl;
    double[,] ytbl;
    alglib.odesolverreport rep;
    alglib.odesolverrkck(y, x, eps, h, out s);
    alglib.odesolversolve(s, ode_function_1_diff, null);
    alglib.odesolverresults(s, out m, out xtbl, out ytbl, out rep);
    System.Console.WriteLine("{0}", m); // EXPECTED: 4
    System.Console.WriteLine("{0}", alglib.ap.format(xtbl,2)); // EXPECTED: [0, 1, 2, 3]
    System.Console.WriteLine("{0}", alglib.ap.format(ytbl,2)); // EXPECTED: [[1], [0.367], [0.135], [0.050]]
    System.Console.ReadLine();
    return 0;
}


cmatrixlq
cmatrixlqunpackl
cmatrixlqunpackq
cmatrixqr
cmatrixqrunpackq
cmatrixqrunpackr
hmatrixtd
hmatrixtdunpackq
rmatrixbd
rmatrixbdmultiplybyp
rmatrixbdmultiplybyq
rmatrixbdunpackdiagonals
rmatrixbdunpackpt
rmatrixbdunpackq
rmatrixhessenberg
rmatrixhessenbergunpackh
rmatrixhessenbergunpackq
rmatrixlq
rmatrixlqunpackl
rmatrixlqunpackq
rmatrixqr
rmatrixqrunpackq
rmatrixqrunpackr
smatrixtd
smatrixtdunpackq
/************************************************************************* LQ decomposition of a rectangular complex matrix of size MxN Input parameters: A - matrix A whose indexes range within [0..M-1, 0..N-1] M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices Q and L in compact form Tau - array of scalar factors which are used to form matrix Q. Array whose indexes range within [0.. Min(M,N)-1] Matrix A is represented as A = LQ, where Q is an orthogonal matrix of size MxM, L - lower triangular (or lower trapezoid) matrix of size MxN. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994 *************************************************************************/
public static void cmatrixlq( ref complex[,] a, int m, int n, out complex[] tau)
/************************************************************************* Unpacking of matrix L from the LQ decomposition of a matrix A Input parameters: A - matrices Q and L in compact form. Output of CMatrixLQ subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: L - matrix L, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
public static void cmatrixlqunpackl( complex[,] a, int m, int n, out complex[,] l)
/************************************************************************* Partial unpacking of matrix Q from LQ decomposition of a complex matrix A. Input parameters: A - matrices L and Q in compact form. Output of CMatrixLQ subroutine. M - number of rows in matrix A. M>=0. N - number of columns in matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of CMatrixLQ subroutine. QRows - required number of rows in matrix Q. N>=QRows>=0. Output parameters: Q - first QRows rows of matrix Q. Array whose index ranges within [0..QRows-1, 0..N-1]. If QRows=0, array isn't changed. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
public static void cmatrixlqunpackq( complex[,] a, int m, int n, complex[] tau, int qrows, out complex[,] q)
/************************************************************************* QR decomposition of a rectangular complex matrix of size MxN Input parameters: A - matrix A whose indexes range within [0..M-1, 0..N-1] M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices Q and R in compact form Tau - array of scalar factors which are used to form matrix Q. Array whose indexes range within [0.. Min(M,N)-1] Matrix A is represented as A = QR, where Q is an orthogonal matrix of size MxM, R - upper triangular (or upper trapezoid) matrix of size MxN. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994 *************************************************************************/
public static void cmatrixqr( ref complex[,] a, int m, int n, out complex[] tau)
/************************************************************************* Partial unpacking of matrix Q from QR decomposition of a complex matrix A. Input parameters: A - matrices Q and R in compact form. Output of CMatrixQR subroutine . M - number of rows in matrix A. M>=0. N - number of columns in matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of CMatrixQR subroutine . QColumns - required number of columns in matrix Q. M>=QColumns>=0. Output parameters: Q - first QColumns columns of matrix Q. Array whose index ranges within [0..M-1, 0..QColumns-1]. If QColumns=0, array isn't changed. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
public static void cmatrixqrunpackq( complex[,] a, int m, int n, complex[] tau, int qcolumns, out complex[,] q)
/************************************************************************* Unpacking of matrix R from the QR decomposition of a matrix A Input parameters: A - matrices Q and R in compact form. Output of CMatrixQR subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: R - matrix R, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
public static void cmatrixqrunpackr( complex[,] a, int m, int n, out complex[,] r)
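A hypothetical QR sketch (not an official example; the 3x2 matrix is invented, and alglib.complex is assumed to be the complex type from the ap support unit, with a (re,im) constructor and an implicit conversion from double):

public static int Main(string[] args)
{
    // 3x2 complex matrix to be factored as A = Q*R
    alglib.complex[,] a = new alglib.complex[,]{
        {new alglib.complex(1,1), 2.0},
        {3.0, new alglib.complex(0,-1)},
        {5.0, 4.0}};
    alglib.complex[] tau;
    alglib.complex[,] q;
    alglib.complex[,] r;

    alglib.cmatrixqr(ref a, 3, 2, out tau);          // A is overwritten with compact form
    alglib.cmatrixqrunpackq(a, 3, 2, tau, 3, out q); // full 3x3 matrix Q
    alglib.cmatrixqrunpackr(a, 3, 2, out r);         // 3x2 upper triangular R
    // the product Q*R should reproduce the original matrix up to rounding
    return 0;
}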
/************************************************************************* Reduction of a Hermitian matrix which is given by its higher or lower triangular part to a real tridiagonal matrix using unitary similarity transformation: Q'*A*Q = T. Input parameters: A - matrix to be transformed array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. If IsUpper = True, then matrix A is given by its upper triangle, and the lower triangle is not used and not modified by the algorithm, and vice versa if IsUpper = False. Output parameters: A - matrices T and Q in compact form (see lower) Tau - array of factors which are forming matrices H(i) array with elements [0..N-2]. D - main diagonal of real symmetric matrix T. array with elements [0..N-1]. E - secondary diagonal of real symmetric matrix T. array with elements [0..N-2]. If IsUpper=True, the matrix Q is represented as a product of elementary reflectors Q = H(n-2) . . . H(2) H(0). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(i+1:n-1) = 0, v(i) = 1, v(0:i-1) is stored on exit in A(0:i-1,i+1), and tau in TAU(i). If IsUpper=False, the matrix Q is represented as a product of elementary reflectors Q = H(0) H(2) . . . H(n-2). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(0:i) = 0, v(i+1) = 1, v(i+2:n-1) is stored on exit in A(i+2:n-1,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = 'U': if UPLO = 'L': ( d e v1 v2 v3 ) ( d ) ( d e v2 v3 ) ( e d ) ( d e v3 ) ( v0 e d ) ( d e ) ( v0 v1 e d ) ( d ) ( v0 v1 v2 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University October 31, 1992 *************************************************************************/
public static void hmatrixtd( ref complex[,] a, int n, bool isupper, out complex[] tau, out double[] d, out double[] e)
/************************************************************************* Unpacking matrix Q which reduces a Hermitian matrix to a real tridiagonal form. Input parameters: A - the result of a HMatrixTD subroutine N - size of matrix A. IsUpper - storage format (a parameter of HMatrixTD subroutine) Tau - the result of a HMatrixTD subroutine Output parameters: Q - transformation matrix. array with elements [0..N-1, 0..N-1]. -- ALGLIB -- Copyright 2005-2010 by Bochkanov Sergey *************************************************************************/
public static void hmatrixtdunpackq( complex[,] a, int n, bool isupper, complex[] tau, out complex[,] q)
/************************************************************************* Reduction of a rectangular matrix to bidiagonal form The algorithm reduces the rectangular matrix A to bidiagonal form by orthogonal transformations P and Q: A = Q*B*P. Input parameters: A - source matrix. array[0..M-1, 0..N-1] M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices Q, B, P in compact form (see below). TauQ - scalar factors which are used to form matrix Q. TauP - scalar factors which are used to form matrix P. The main diagonal and one of the secondary diagonals of matrix A are replaced with bidiagonal matrix B. Other elements contain elementary reflections which form MxM matrix Q and NxN matrix P, respectively. If M>=N, B is the upper bidiagonal MxN matrix and is stored in the corresponding elements of matrix A. Matrix Q is represented as a product of elementary reflections Q = H(0)*H(1)*...*H(n-1), where H(i) = 1-tau*v*v'. Here tau is a scalar which is stored in TauQ[i], and vector v has the following structure: v(0:i-1)=0, v(i)=1, v(i+1:m-1) is stored in elements A(i+1:m-1,i). Matrix P is as follows: P = G(0)*G(1)*...*G(n-2), where G(i) = 1 - tau*u*u'. Tau is stored in TauP[i], u(0:i)=0, u(i+1)=1, u(i+2:n-1) is stored in elements A(i,i+2:n-1). If M<N, B is the lower bidiagonal MxN matrix and is stored in the corresponding elements of matrix A. Q = H(0)*H(1)*...*H(m-2), where H(i) = 1 - tau*v*v', tau is stored in TauQ, v(0:i)=0, v(i+1)=1, v(i+2:m-1) is stored in elements A(i+2:m-1,i). P = G(0)*G(1)*...*G(m-1), G(i) = 1-tau*u*u', tau is stored in TauP, u(0:i-1)=0, u(i)=1, u(i+1:n-1) is stored in A(i,i+1:n-1). EXAMPLE: m=6, n=5 (m > n): m=5, n=6 (m < n): ( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 ) ( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 ) ( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 ) ( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 ) ( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 ) ( v1 v2 v3 v4 v5 ) Here vi and ui are vectors which form H(i) and G(i), and d and e - are the diagonal and off-diagonal elements of matrix B. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994. Sergey Bochkanov, ALGLIB project, translation from FORTRAN to pseudocode, 2007-2010. *************************************************************************/
public static void rmatrixbd( ref double[,] a, int m, int n, out double[] tauq, out double[] taup)
/************************************************************************* Multiplication by matrix P which reduces matrix A to bidiagonal form. The algorithm allows pre- or post-multiply by P or P'. Input parameters: QP - matrices Q and P in compact form. Output of RMatrixBD subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUP - scalar factors which are used to form P. Output of RMatrixBD subroutine. Z - multiplied matrix. Array whose indexes range within [0..ZRows-1,0..ZColumns-1]. ZRows - number of rows in matrix Z. If FromTheRight=False, ZRows=N, otherwise ZRows can be arbitrary. ZColumns - number of columns in matrix Z. If FromTheRight=True, ZColumns=N, otherwise ZColumns can be arbitrary. FromTheRight - pre- or post-multiply. DoTranspose - multiply by P or P'. Output parameters: Z - product of Z and P. Array whose indexes range within [0..ZRows-1,0..ZColumns-1]. If ZRows=0 or ZColumns=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixbdmultiplybyp( double[,] qp, int m, int n, double[] taup, ref double[,] z, int zrows, int zcolumns, bool fromtheright, bool dotranspose)
/************************************************************************* Multiplication by matrix Q which reduces matrix A to bidiagonal form. The algorithm allows pre- or post-multiply by Q or Q'. Input parameters: QP - matrices Q and P in compact form. Output of ToBidiagonal subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUQ - scalar factors which are used to form Q. Output of ToBidiagonal subroutine. Z - multiplied matrix. array[0..ZRows-1,0..ZColumns-1] ZRows - number of rows in matrix Z. If FromTheRight=False, ZRows=M, otherwise ZRows can be arbitrary. ZColumns - number of columns in matrix Z. If FromTheRight=True, ZColumns=M, otherwise ZColumns can be arbitrary. FromTheRight - pre- or post-multiply. DoTranspose - multiply by Q or Q'. Output parameters: Z - product of Z and Q. Array[0..ZRows-1,0..ZColumns-1] If ZRows=0 or ZColumns=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixbdmultiplybyq( double[,] qp, int m, int n, double[] tauq, ref double[,] z, int zrows, int zcolumns, bool fromtheright, bool dotranspose)
/************************************************************************* Unpacking of the main and secondary diagonals of bidiagonal decomposition of matrix A. Input parameters: B - output of RMatrixBD subroutine. M - number of rows in matrix B. N - number of columns in matrix B. Output parameters: IsUpper - True, if the matrix is upper bidiagonal. otherwise IsUpper is False. D - the main diagonal. Array whose index ranges within [0..Min(M,N)-1]. E - the secondary diagonal (upper or lower, depending on the value of IsUpper). Array index ranges within [0..Min(M,N)-1], the last element is not used. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixbdunpackdiagonals( double[,] b, int m, int n, out bool isupper, out double[] d, out double[] e)
/************************************************************************* Unpacking matrix P which reduces matrix A to bidiagonal form. The subroutine returns transposed matrix P. Input parameters: QP - matrices Q and P in compact form. Output of ToBidiagonal subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUP - scalar factors which are used to form P. Output of ToBidiagonal subroutine. PTRows - required number of rows of matrix P^T. N >= PTRows >= 0. Output parameters: PT - first PTRows columns of matrix P^T Array[0..PTRows-1, 0..N-1] If PTRows=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixbdunpackpt( double[,] qp, int m, int n, double[] taup, int ptrows, out double[,] pt)
/************************************************************************* Unpacking matrix Q which reduces a matrix to bidiagonal form. Input parameters: QP - matrices Q and P in compact form. Output of ToBidiagonal subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUQ - scalar factors which are used to form Q. Output of ToBidiagonal subroutine. QColumns - required number of columns in matrix Q. M>=QColumns>=0. Output parameters: Q - first QColumns columns of matrix Q. Array[0..M-1, 0..QColumns-1] If QColumns=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixbdunpackq( double[,] qp, int m, int n, double[] tauq, int qcolumns, out double[,] q)
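
A minimal usage sketch for RMatrixBD and its unpacking routines follows. The matrix, sizes and printed values are illustrative only; the sketch assumes the same alglib namespace and console conventions as the other examples in this manual.

public static int Main(string[] args)
{
    //
    // Illustrative sketch: reduce a 3x2 matrix to bidiagonal form,
    // then unpack the diagonals of B and the first 2 columns of Q.
    //
    double[,] a = new double[,]{{1,2},{3,4},{5,6}};
    double[] tauq;
    double[] taup;
    bool isupper;
    double[] d;
    double[] e;
    double[,] q;

    alglib.rmatrixbd(ref a, 3, 2, out tauq, out taup);
    alglib.rmatrixbdunpackdiagonals(a, 3, 2, out isupper, out d, out e);
    alglib.rmatrixbdunpackq(a, 3, 2, tauq, 2, out q);

    // for M>=N the matrix B is upper bidiagonal
    System.Console.WriteLine("{0}", isupper);                 // EXPECTED: True
    System.Console.WriteLine("{0}", alglib.ap.format(d, 4));  // main diagonal of B
    System.Console.ReadLine();
    return 0;
}
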
/************************************************************************* Reduction of a square matrix to upper Hessenberg form: Q'*A*Q = H, where Q is an orthogonal matrix, H - Hessenberg matrix. Input parameters: A - matrix A with elements [0..N-1, 0..N-1] N - size of matrix A. Output parameters: A - matrices Q and P in compact form (see below). Tau - array of scalar factors which are used to form matrix Q. Array whose index ranges within [0..N-2] Matrix H is located on the main diagonal, on the lower secondary diagonal and above the main diagonal of matrix A. The elements which are used to form matrix Q are situated in array Tau and below the lower secondary diagonal of matrix A as follows: Matrix Q is represented as a product of elementary reflections Q = H(0)*H(2)*...*H(n-2), where each H(i) is given by H(i) = 1 - tau * v * (v^T) where tau is a scalar stored in Tau[I]; v - is a real vector, so that v(0:i) = 0, v(i+1) = 1, v(i+2:n-1) stored in A(i+2:n-1,i). -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University October 31, 1992 *************************************************************************/
public static void rmatrixhessenberg( ref double[,] a, int n, out double[] tau)
/************************************************************************* Unpacking matrix H (the result of matrix A reduction to upper Hessenberg form) Input parameters: A - output of RMatrixHessenberg subroutine. N - size of matrix A. Output parameters: H - matrix H. Array whose indexes range within [0..N-1, 0..N-1]. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixhessenbergunpackh( double[,] a, int n, out double[,] h)
/************************************************************************* Unpacking matrix Q which reduces matrix A to upper Hessenberg form Input parameters: A - output of RMatrixHessenberg subroutine. N - size of matrix A. Tau - scalar factors which are used to form Q. Output of RMatrixHessenberg subroutine. Output parameters: Q - matrix Q. Array whose indexes range within [0..N-1, 0..N-1]. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixhessenbergunpackq( double[,] a, int n, double[] tau, out double[,] q)
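
An illustrative sketch for RMatrixHessenberg and its unpacking routines; data and printed values are for demonstration only (same conventions as the other examples in this manual).

public static int Main(string[] args)
{
    //
    // Illustrative sketch: reduce a 3x3 matrix to upper Hessenberg form
    // and unpack both H and the orthogonal matrix Q, so that Q'*A*Q = H.
    //
    double[,] a = new double[,]{{1,2,3},{4,5,6},{7,8,10}};
    double[] tau;
    double[,] h;
    double[,] q;

    alglib.rmatrixhessenberg(ref a, 3, out tau);
    alglib.rmatrixhessenbergunpackh(a, 3, out h);
    alglib.rmatrixhessenbergunpackq(a, 3, tau, out q);

    // H has zeros below its first subdiagonal
    System.Console.WriteLine("{0}", alglib.ap.format(h, 4));
    System.Console.ReadLine();
    return 0;
}
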
/************************************************************************* LQ decomposition of a rectangular matrix of size MxN Input parameters: A - matrix A whose indexes range within [0..M-1, 0..N-1]. M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices L and Q in compact form (see below) Tau - array of scalar factors which are used to form matrix Q. Array whose index ranges within [0..Min(M,N)-1]. Matrix A is represented as A = LQ, where Q is an orthogonal matrix of size MxM, L - lower triangular (or lower trapezoid) matrix of size M x N. The elements of matrix L are located on and below the main diagonal of matrix A. The elements which are located in Tau array and above the main diagonal of matrix A are used to form matrix Q as follows: Matrix Q is represented as a product of elementary reflections Q = H(k-1)*H(k-2)*...*H(1)*H(0), where k = min(m,n), and each H(i) is of the form H(i) = 1 - tau * v * (v^T) where tau is a scalar stored in Tau[I]; v - real vector, so that v(0:i-1)=0, v(i) = 1, v(i+1:n-1) stored in A(i,i+1:n-1). -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixlq( ref double[,] a, int m, int n, out double[] tau)
/************************************************************************* Unpacking of matrix L from the LQ decomposition of a matrix A Input parameters: A - matrices Q and L in compact form. Output of RMatrixLQ subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: L - matrix L, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixlqunpackl( double[,] a, int m, int n, out double[,] l)
/************************************************************************* Partial unpacking of matrix Q from the LQ decomposition of a matrix A Input parameters: A - matrices L and Q in compact form. Output of RMatrixLQ subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of the RMatrixLQ subroutine. QRows - required number of rows in matrix Q. N>=QRows>=0. Output parameters: Q - first QRows rows of matrix Q. Array whose indexes range within [0..QRows-1, 0..N-1]. If QRows=0, the array remains unchanged. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixlqunpackq( double[,] a, int m, int n, double[] tau, int qrows, out double[,] q)
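
An illustrative sketch for RMatrixLQ and its unpacking routines; data and printed values are for demonstration only (same conventions as the other examples in this manual).

public static int Main(string[] args)
{
    //
    // Illustrative sketch: LQ decomposition of a 2x3 matrix, A = L*Q,
    // with unpacking of L and of the full 3x3 matrix Q.
    //
    double[,] a = new double[,]{{1,2,3},{4,5,6}};
    double[] tau;
    double[,] l;
    double[,] q;

    alglib.rmatrixlq(ref a, 2, 3, out tau);
    alglib.rmatrixlqunpackl(a, 2, 3, out l);
    alglib.rmatrixlqunpackq(a, 2, 3, tau, 3, out q);

    // L is lower trapezoidal, rows of Q are orthonormal
    System.Console.WriteLine("{0}", alglib.ap.format(l, 4));
    System.Console.ReadLine();
    return 0;
}
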
/************************************************************************* QR decomposition of a rectangular matrix of size MxN Input parameters: A - matrix A whose indexes range within [0..M-1, 0..N-1]. M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices Q and R in compact form (see below). Tau - array of scalar factors which are used to form matrix Q. Array whose index ranges within [0.. Min(M-1,N-1)]. Matrix A is represented as A = QR, where Q is an orthogonal matrix of size MxM, R - upper triangular (or upper trapezoid) matrix of size M x N. The elements of matrix R are located on and above the main diagonal of matrix A. The elements which are located in Tau array and below the main diagonal of matrix A are used to form matrix Q as follows: Matrix Q is represented as a product of elementary reflections Q = H(0)*H(2)*...*H(k-1), where k = min(m,n), and each H(i) is in the form H(i) = 1 - tau * v * (v^T) where tau is a scalar stored in Tau[I]; v - real vector, so that v(0:i-1) = 0, v(i) = 1, v(i+1:m-1) stored in A(i+1:m-1,i). -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixqr( ref double[,] a, int m, int n, out double[] tau)
/************************************************************************* Partial unpacking of matrix Q from the QR decomposition of a matrix A Input parameters: A - matrices Q and R in compact form. Output of RMatrixQR subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of the RMatrixQR subroutine. QColumns - required number of columns of matrix Q. M>=QColumns>=0. Output parameters: Q - first QColumns columns of matrix Q. Array whose indexes range within [0..M-1, 0..QColumns-1]. If QColumns=0, the array remains unchanged. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixqrunpackq( double[,] a, int m, int n, double[] tau, int qcolumns, out double[,] q)
/************************************************************************* Unpacking of matrix R from the QR decomposition of a matrix A Input parameters: A - matrices Q and R in compact form. Output of RMatrixQR subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: R - matrix R, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixqrunpackr( double[,] a, int m, int n, out double[,] r)
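
An illustrative sketch for RMatrixQR and its unpacking routines; data and printed values are for demonstration only (same conventions as the other examples in this manual).

public static int Main(string[] args)
{
    //
    // Illustrative sketch: QR decomposition of a 3x2 matrix, A = Q*R,
    // with unpacking of the full 3x3 matrix Q and of R.
    //
    double[,] a = new double[,]{{1,2},{3,4},{5,6}};
    double[] tau;
    double[,] q;
    double[,] r;

    alglib.rmatrixqr(ref a, 3, 2, out tau);
    alglib.rmatrixqrunpackq(a, 3, 2, tau, 3, out q);
    alglib.rmatrixqrunpackr(a, 3, 2, out r);

    // R is upper trapezoidal; Q*R reproduces the original matrix (up to rounding)
    System.Console.WriteLine("{0}", alglib.ap.format(r, 4));
    System.Console.ReadLine();
    return 0;
}
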
/************************************************************************* Reduction of a symmetric matrix which is given by its higher or lower triangular part to a tridiagonal matrix using orthogonal similarity transformation: Q'*A*Q=T. Input parameters: A - matrix to be transformed array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. If IsUpper = True, then matrix A is given by its upper triangle, and the lower triangle is not used and not modified by the algorithm, and vice versa if IsUpper = False. Output parameters: A - matrices T and Q in compact form (see lower) Tau - array of factors which are forming matrices H(i) array with elements [0..N-2]. D - main diagonal of symmetric matrix T. array with elements [0..N-1]. E - secondary diagonal of symmetric matrix T. array with elements [0..N-2]. If IsUpper=True, the matrix Q is represented as a product of elementary reflectors Q = H(n-2) . . . H(2) H(0). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(i+1:n-1) = 0, v(i) = 1, v(0:i-1) is stored on exit in A(0:i-1,i+1), and tau in TAU(i). If IsUpper=False, the matrix Q is represented as a product of elementary reflectors Q = H(0) H(2) . . . H(n-2). Each H(i) has the form H(i) = I - tau * v * v' where tau is a real scalar, and v is a real vector with v(0:i) = 0, v(i+1) = 1, v(i+2:n-1) is stored on exit in A(i+2:n-1,i), and tau in TAU(i). The contents of A on exit are illustrated by the following examples with n = 5: if UPLO = 'U': if UPLO = 'L': ( d e v1 v2 v3 ) ( d ) ( d e v2 v3 ) ( e d ) ( d e v3 ) ( v0 e d ) ( d e ) ( v0 v1 e d ) ( d ) ( v0 v1 v2 e d ) where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University October 31, 1992 *************************************************************************/
public static void smatrixtd( ref double[,] a, int n, bool isupper, out double[] tau, out double[] d, out double[] e)
/************************************************************************* Unpacking matrix Q which reduces symmetric matrix to a tridiagonal form. Input parameters: A - the result of a SMatrixTD subroutine N - size of matrix A. IsUpper - storage format (a parameter of SMatrixTD subroutine) Tau - the result of a SMatrixTD subroutine Output parameters: Q - transformation matrix. array with elements [0..N-1, 0..N-1]. -- ALGLIB -- Copyright 2005-2010 by Bochkanov Sergey *************************************************************************/
public static void smatrixtdunpackq( double[,] a, int n, bool isupper, double[] tau, out double[,] q)
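
An illustrative sketch for SMatrixTD/SMatrixTDUnpackQ; data and printed values are for demonstration only (same conventions as the other examples in this manual).

public static int Main(string[] args)
{
    //
    // Illustrative sketch: reduce a 3x3 symmetric matrix (upper triangle
    // supplied, lower triangle ignored) to tridiagonal form T = Q'*A*Q.
    //
    double[,] a = new double[,]{{4,1,2},{0,3,1},{0,0,2}};
    double[] tau;
    double[] d;
    double[] e;
    double[,] q;

    alglib.smatrixtd(ref a, 3, true, out tau, out d, out e);
    alglib.smatrixtdunpackq(a, 3, true, tau, out q);

    // d is the main diagonal of T, e its off-diagonal
    System.Console.WriteLine("{0}", alglib.ap.format(d, 4));
    System.Console.WriteLine("{0}", alglib.ap.format(e, 4));
    System.Console.ReadLine();
    return 0;
}
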
pcabuildbasis
/************************************************************************* Principal components analysis Subroutine builds orthogonal basis where first axis corresponds to direction with maximum variance, second axis maximizes variance in subspace orthogonal to first axis and so on. It should be noted that, unlike LDA, PCA does not use class labels. INPUT PARAMETERS: X - dataset, array[0..NPoints-1,0..NVars-1]. matrix contains ONLY INDEPENDENT VARIABLES. NPoints - dataset size, NPoints>=0 NVars - number of independent variables, NVars>=1 OUTPUT PARAMETERS: Info - return code: * -4, if SVD subroutine hasn't converged * -1, if wrong parameters have been passed (NPoints<0, NVars<1) * 1, if task is solved S2 - array[0..NVars-1]. variance values corresponding to basis vectors. V - array[0..NVars-1,0..NVars-1] matrix, whose columns store basis vectors. -- ALGLIB -- Copyright 25.08.2008 by Bochkanov Sergey *************************************************************************/
public static void pcabuildbasis( double[,] x, int npoints, int nvars, out int info, out double[] s2, out double[,] v)
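
An illustrative sketch for PCABuildBasis; the dataset is hypothetical and the printed values are for demonstration only (same conventions as the other examples in this manual).

public static int Main(string[] args)
{
    //
    // Illustrative sketch: PCA basis for four 2-dimensional points lying
    // approximately along the line y=x.
    //
    double[,] x = new double[,]{{1,1},{2,2.1},{3,2.9},{4,4}};
    int info;
    double[] s2;
    double[,] v;

    alglib.pcabuildbasis(x, 4, 2, out info, out s2, out v);

    // info=1 on success; s2 holds variances, columns of v are basis vectors.
    // The dominant basis vector is roughly proportional to (0.707,0.707)
    // (the sign of a basis vector is not fixed).
    System.Console.WriteLine("{0}", info);                    // EXPECTED: 1
    System.Console.WriteLine("{0}", alglib.ap.format(s2, 4));
    System.Console.WriteLine("{0}", alglib.ap.format(v, 4));
    System.Console.ReadLine();
    return 0;
}
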
invpoissondistribution
poissoncdistribution
poissondistribution
/************************************************************************* Inverse Poisson distribution Finds the Poisson variable x such that the integral from 0 to x of the Poisson density is equal to the given probability y. This is accomplished using the inverse gamma integral function and the relation m = igami( k+1, y ). ACCURACY: See inverse incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double invpoissondistribution(int k, double y)
/************************************************************************* Complemented Poisson distribution Returns the sum of the terms k+1 to infinity of the Poisson distribution: Sum( j=k+1..inf, exp(-m)*m^j/j! ) The terms are not summed directly; instead the incomplete gamma integral is employed, according to the formula y = pdtrc( k, m ) = igam( k+1, m ). The arguments must both be positive. ACCURACY: See incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double poissoncdistribution(int k, double m)
/************************************************************************* Poisson distribution Returns the sum of the first k+1 terms of the Poisson distribution: Sum( j=0..k, exp(-m)*m^j/j! ) The terms are not summed directly; instead the incomplete gamma integral is employed, according to the relation y = pdtr( k, m ) = igamc( k+1, m ). The arguments must both be positive. ACCURACY: See incomplete gamma function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double poissondistribution(int k, double m)
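
An illustrative sketch relating PoissonDistribution, PoissonCDistribution and InvPoissonDistribution; the parameters are hypothetical (same conventions as the other examples in this manual).

public static int Main(string[] args)
{
    //
    // Illustrative sketch: the Poisson CDF and its complement sum to 1,
    // and the inverse CDF recovers the mean m from the CDF value.
    //
    double m = 2.0;
    int k = 3;
    double p = alglib.poissondistribution(k, m);   // P(X<=3) for mean m=2
    double q = alglib.poissoncdistribution(k, m);  // P(X>3)  for mean m=2

    System.Console.WriteLine("{0:F4}", p + q);                                // EXPECTED: 1.0000
    System.Console.WriteLine("{0:F4}", alglib.invpoissondistribution(k, p));  // EXPECTED: 2.0000
    System.Console.ReadLine();
    return 0;
}
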
polynomialbar2cheb
polynomialbar2pow
polynomialbuild
polynomialbuildcheb1
polynomialbuildcheb2
polynomialbuildeqdist
polynomialcalccheb1
polynomialcalccheb2
polynomialcalceqdist
polynomialcheb2bar
polynomialpow2bar
polint_d_calcdiff Interpolation and differentiation using barycentric representation
polint_d_conv Conversion between power basis and barycentric representation
polint_d_spec Polynomial interpolation on special grids (equidistant, Chebyshev I/II)
/************************************************************************* Conversion from barycentric representation to Chebyshev basis. This function has O(N^2) complexity. INPUT PARAMETERS: P - polynomial in barycentric form A,B - base interval for Chebyshev polynomials (see below) A<>B OUTPUT PARAMETERS T - coefficients of Chebyshev representation; P(x) = sum { T[i]*Ti(2*(x-A)/(B-A)-1), i=0..N-1 }, where Ti - I-th Chebyshev polynomial. NOTES: barycentric interpolant passed as P may be either polynomial obtained from polynomial interpolation/ fitting or rational function which is NOT polynomial. We can't distinguish between these two cases, and this algorithm just tries to work assuming that P IS a polynomial. If not, algorithm will return results, but they won't have any meaning. -- ALGLIB -- Copyright 30.09.2010 by Bochkanov Sergey *************************************************************************/
public static void polynomialbar2cheb( barycentricinterpolant p, double a, double b, out double[] t)
/************************************************************************* Conversion from barycentric representation to power basis. This function has O(N^2) complexity. INPUT PARAMETERS: P - polynomial in barycentric form C - offset (see below); 0.0 is used as default value. S - scale (see below); 1.0 is used as default value. S<>0. OUTPUT PARAMETERS A - coefficients, P(x) = sum { A[i]*((X-C)/S)^i, i=0..N-1 } N - number of coefficients (polynomial degree plus 1) NOTES: 1. this function accepts offset and scale, which can be set to improve numerical properties of polynomial. For example, if P was obtained as a result of interpolation on [-1,+1], you can set C=0 and S=1 and represent P as sum of 1, x, x^2, x^3 and so on. In most cases it is exactly what you need. However, if your interpolation model was built on [999,1001], you will see significant growth of numerical errors when using {1, x, x^2, x^3} as basis. Representing P as sum of 1, (x-1000), (x-1000)^2, (x-1000)^3 will be a better option. Such representation can be obtained by using 1000.0 as offset C and 1.0 as scale S. 2. power basis is ill-conditioned and tricks described above can't solve this problem completely. This function will return coefficients in any case, but for N>8 they will become unreliable. However, N's less than 5 are pretty safe. 3. barycentric interpolant passed as P may be either a polynomial obtained from polynomial interpolation/fitting or a rational function which is NOT a polynomial. We can't distinguish between these two cases, and this algorithm just tries to work assuming that P IS a polynomial. If not, the algorithm will return results, but they won't have any meaning. -- ALGLIB -- Copyright 30.09.2010 by Bochkanov Sergey *************************************************************************/
public static void polynomialbar2pow( barycentricinterpolant p, out double[] a)
public static void polynomialbar2pow( barycentricinterpolant p, double c, double s, out double[] a)

Examples:   [1]  

/************************************************************************* Lagrange interpolant: generation of the model on the general grid. This function has O(N^2) complexity. INPUT PARAMETERS: X - abscissas, array[0..N-1] Y - function values, array[0..N-1] N - number of points, N>=1 OUTPUT PARAMETERS P - barycentric model which represents Lagrange interpolant (see ratint unit info and BarycentricCalc() description for more information). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
public static void polynomialbuild( double[] x, double[] y, out barycentricinterpolant p)
public static void polynomialbuild( double[] x, double[] y, int n, out barycentricinterpolant p)

Examples:   [1]  

/************************************************************************* Lagrange interpolant on Chebyshev grid (first kind). This function has O(N) complexity. INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] Y - function values at the nodes, array[0..N-1], Y[I] = Y(0.5*(B+A) + 0.5*(B-A)*Cos(PI*(2*i+1)/(2*n))) N - number of points, N>=1; for N=1 a constant model is constructed. OUTPUT PARAMETERS P - barycentric model which represents Lagrange interpolant (see ratint unit info and BarycentricCalc() description for more information). -- ALGLIB -- Copyright 03.12.2009 by Bochkanov Sergey *************************************************************************/
public static void polynomialbuildcheb1( double a, double b, double[] y, out barycentricinterpolant p)
public static void polynomialbuildcheb1( double a, double b, double[] y, int n, out barycentricinterpolant p)

Examples:   [1]  

/************************************************************************* Lagrange interpolant on Chebyshev grid (second kind). This function has O(N) complexity. INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] Y - function values at the nodes, array[0..N-1], Y[I] = Y(0.5*(B+A) + 0.5*(B-A)*Cos(PI*i/(n-1))) N - number of points, N>=1; for N=1 a constant model is constructed. OUTPUT PARAMETERS P - barycentric model which represents Lagrange interpolant (see ratint unit info and BarycentricCalc() description for more information). -- ALGLIB -- Copyright 03.12.2009 by Bochkanov Sergey *************************************************************************/
public static void polynomialbuildcheb2( double a, double b, double[] y, out barycentricinterpolant p)
public static void polynomialbuildcheb2( double a, double b, double[] y, int n, out barycentricinterpolant p)

Examples:   [1]  

/************************************************************************* Lagrange interpolant: generation of the model on equidistant grid. This function has O(N) complexity. INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] Y - function values at the nodes, array[0..N-1] N - number of points, N>=1; for N=1 a constant model is constructed. OUTPUT PARAMETERS P - barycentric model which represents Lagrange interpolant (see ratint unit info and BarycentricCalc() description for more information). -- ALGLIB -- Copyright 03.12.2009 by Bochkanov Sergey *************************************************************************/
public static void polynomialbuildeqdist( double a, double b, double[] y, out barycentricinterpolant p)
public static void polynomialbuildeqdist( double a, double b, double[] y, int n, out barycentricinterpolant p)

Examples:   [1]  

/************************************************************************* Fast polynomial interpolation function on Chebyshev points (first kind) with O(N) complexity. INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] F - function values, array[0..N-1] N - number of points on Chebyshev grid (first kind), X[i] = 0.5*(B+A) + 0.5*(B-A)*Cos(PI*(2*i+1)/(2*n)); for N=1 a constant model is constructed. T - position where P(x) is calculated RESULT value of the Lagrange interpolant at T IMPORTANT: this function provides a fast interface which is neither overflow-safe nor very precise. The best option is to use the PolynomialBuildCheb1()/BarycentricCalc() subroutines unless you are pretty sure that your data will not result in overflow. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
public static double polynomialcalccheb1( double a, double b, double[] f, double t)
public static double polynomialcalccheb1( double a, double b, double[] f, int n, double t)

Examples:   [1]  

/************************************************************************* Fast polynomial interpolation function on Chebyshev points (second kind) with O(N) complexity. INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] F - function values, array[0..N-1] N - number of points on Chebyshev grid (second kind), X[i] = 0.5*(B+A) + 0.5*(B-A)*Cos(PI*i/(n-1)); for N=1 a constant model is constructed. T - position where P(x) is calculated RESULT value of the Lagrange interpolant at T IMPORTANT: this function provides a fast interface which is neither overflow-safe nor very precise. The best option is to use the PolynomialBuildCheb2()/BarycentricCalc() subroutines unless you are pretty sure that your data will not result in overflow. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
public static double polynomialcalccheb2( double a, double b, double[] f, double t)
public static double polynomialcalccheb2( double a, double b, double[] f, int n, double t)

Examples:   [1]  

/************************************************************************* Fast equidistant polynomial interpolation function with O(N) complexity INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] F - function values, array[0..N-1] N - number of points on equidistant grid, N>=1; for N=1 a constant model is constructed. T - position where P(x) is calculated RESULT value of the Lagrange interpolant at T IMPORTANT: this function provides a fast interface which is neither overflow-safe nor very precise. The best option is to use the PolynomialBuildEqDist()/BarycentricCalc() subroutines unless you are pretty sure that your data will not result in overflow. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
public static double polynomialcalceqdist( double a, double b, double[] f, double t)
public static double polynomialcalceqdist( double a, double b, double[] f, int n, double t)

Examples:   [1]  

/************************************************************************* Conversion from Chebyshev basis to barycentric representation. This function has O(N^2) complexity. INPUT PARAMETERS: T - coefficients of Chebyshev representation; P(x) = sum { T[i]*Ti(2*(x-A)/(B-A)-1), i=0..N }, where Ti - I-th Chebyshev polynomial. N - number of coefficients: * if given, only leading N elements of T are used * if not given, automatically determined from size of T A,B - base interval for Chebyshev polynomials (see above) A<B OUTPUT PARAMETERS P - polynomial in barycentric form -- ALGLIB -- Copyright 30.09.2010 by Bochkanov Sergey *************************************************************************/
public static void polynomialcheb2bar( double[] t, double a, double b, out barycentricinterpolant p)
public static void polynomialcheb2bar( double[] t, int n, double a, double b, out barycentricinterpolant p)
/************************************************************************* Conversion from power basis to barycentric representation. This function has O(N^2) complexity. INPUT PARAMETERS: A - coefficients, P(x) = sum { A[i]*((X-C)/S)^i, i=0..N-1 } N - number of coefficients (polynomial degree plus 1) * if given, only leading N elements of A are used * if not given, automatically determined from size of A C - offset (see below); 0.0 is used as default value. S - scale (see below); 1.0 is used as default value. S<>0. OUTPUT PARAMETERS P - polynomial in barycentric form NOTES: 1. this function accepts offset and scale, which can be set to improve numerical properties of polynomial. For example, if you interpolate on [-1,+1], you can set C=0 and S=1 and convert from sum of 1, x, x^2, x^3 and so on. In most cases it is exactly what you need. However, if your interpolation model was built on [999,1001], you will see significant growth of numerical errors when using {1, x, x^2, x^3} as input basis. Converting from sum of 1, (x-1000), (x-1000)^2, (x-1000)^3 will be a better option (you have to specify 1000.0 as offset C and 1.0 as scale S). 2. power basis is ill-conditioned and tricks described above can't solve this problem completely. This function will return barycentric model in any case, but for N>8 accuracy will degrade. However, N's less than 5 are pretty safe. -- ALGLIB -- Copyright 30.09.2010 by Bochkanov Sergey *************************************************************************/
public static void polynomialpow2bar( double[] a, out barycentricinterpolant p)
public static void polynomialpow2bar( double[] a, int n, double c, double s, out barycentricinterpolant p)

Examples:   [1]  


public static int Main(string[] args)
{
    //
    // Here we demonstrate polynomial interpolation and differentiation
    // of y=x^2-x sampled at [0,1,2]. Barycentric representation of polynomial is used.
    //
    double[] x = new double[]{0,1,2};
    double[] y = new double[]{0,0,2};
    double t = -1;
    double v;
    double dv;
    double d2v;
    alglib.barycentricinterpolant p;

    // barycentric model is created
    alglib.polynomialbuild(x, y, out p);

    // barycentric interpolation is demonstrated
    v = alglib.barycentriccalc(p, t);
    System.Console.WriteLine("{0:F4}", v); // EXPECTED: 2.0

    // barycentric differentiation is demonstrated
    alglib.barycentricdiff1(p, t, out v, out dv);
    System.Console.WriteLine("{0:F4}", v); // EXPECTED: 2.0
    System.Console.WriteLine("{0:F4}", dv); // EXPECTED: -3.0

    // second derivatives with barycentric representation
    alglib.barycentricdiff1(p, t, out v, out dv);
    System.Console.WriteLine("{0:F4}", v); // EXPECTED: 2.0
    System.Console.WriteLine("{0:F4}", dv); // EXPECTED: -3.0
    alglib.barycentricdiff2(p, t, out v, out dv, out d2v);
    System.Console.WriteLine("{0:F4}", v); // EXPECTED: 2.0
    System.Console.WriteLine("{0:F4}", dv); // EXPECTED: -3.0
    System.Console.WriteLine("{0:F4}", d2v); // EXPECTED: 2.0
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // Here we demonstrate conversion of y=x^2-x
    // between power basis and barycentric representation.
    //
    double[] a = new double[]{0,-1,+1};
    double t = 2;
    double[] a2;
    double v;
    alglib.barycentricinterpolant p;

    //
    // a=[0,-1,+1] is decomposition of y=x^2-x in the power basis:
    //
    //     y = 0 - 1*x + 1*x^2
    //
    // We convert it to the barycentric form.
    //
    alglib.polynomialpow2bar(a, out p);

    // now we have barycentric interpolation; we can use it for interpolation
    v = alglib.barycentriccalc(p, t);
    System.Console.WriteLine("{0:F2}", v); // EXPECTED: 2.0

    // we can also convert back from barycentric representation to power basis
    alglib.polynomialbar2pow(p, out a2);
    System.Console.WriteLine("{0}", alglib.ap.format(a2,2)); // EXPECTED: [0,-1,+1]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // Temporaries:
    // * values of y=x^2-x sampled at three special grids:
    //   * equidistant grid spanning [0,2],    x[i] = 2*i/(N-1), i=0..N-1
    //   * Chebyshev-I grid spanning [-1,+1],  x[i] = Cos(PI*(2*i+1)/(2*n)), i=0..N-1
    //   * Chebyshev-II grid spanning [-1,+1], x[i] = Cos(PI*i/(n-1)), i=0..N-1
    // * barycentric interpolants for these three grids
    // * vectors to store coefficients of quadratic representation
    //
    double[] y_eqdist = new double[]{0,0,2};
    double[] y_cheb1 = new double[]{-0.116025,0.000000,1.616025};
    double[] y_cheb2 = new double[]{0,0,2};
    alglib.barycentricinterpolant p_eqdist;
    alglib.barycentricinterpolant p_cheb1;
    alglib.barycentricinterpolant p_cheb2;
    double[] a_eqdist;
    double[] a_cheb1;
    double[] a_cheb2;

    //
    // First, we demonstrate construction of barycentric interpolants on
    // special grids. We unpack power representation to ensure that
    // interpolant was built correctly.
    //
    // In all three cases we should get same quadratic function.
    //
    alglib.polynomialbuildeqdist(0.0, 2.0, y_eqdist, out p_eqdist);
    alglib.polynomialbar2pow(p_eqdist, out a_eqdist);
    System.Console.WriteLine("{0}", alglib.ap.format(a_eqdist,4)); // EXPECTED: [0,-1,+1]

    alglib.polynomialbuildcheb1(-1, +1, y_cheb1, out p_cheb1);
    alglib.polynomialbar2pow(p_cheb1, out a_cheb1);
    System.Console.WriteLine("{0}", alglib.ap.format(a_cheb1,4)); // EXPECTED: [0,-1,+1]

    alglib.polynomialbuildcheb2(-1, +1, y_cheb2, out p_cheb2);
    alglib.polynomialbar2pow(p_cheb2, out a_cheb2);
    System.Console.WriteLine("{0}", alglib.ap.format(a_cheb2,4)); // EXPECTED: [0,-1,+1]

    //
    // Now we demonstrate polynomial interpolation without construction 
    // of the barycentricinterpolant structure.
    //
    // We calculate interpolant value at x=-2.
    // In all three cases we should get same f=6
    //
    double t = -2;
    double v;
    v = alglib.polynomialcalceqdist(0.0, 2.0, y_eqdist, t);
    System.Console.WriteLine("{0:F4}", v); // EXPECTED: 6.0

    v = alglib.polynomialcalccheb1(-1, +1, y_cheb1, t);
    System.Console.WriteLine("{0:F4}", v); // EXPECTED: 6.0

    v = alglib.polynomialcalccheb2(-1, +1, y_cheb2, t);
    System.Console.WriteLine("{0:F4}", v); // EXPECTED: 6.0
    System.Console.ReadLine();
    return 0;
}


psi
/************************************************************************* Psi (digamma) function psi(x) = d/dx ln(Gamma(x)) is the logarithmic derivative of the gamma function. For integer x, psi(n) = -EUL + Sum( k=1..n-1, 1/k ). This formula is used for 0 < n <= 10. If x is negative, it is transformed to a positive argument by the reflection formula psi(1-x) = psi(x) + pi cot(pi x). For general positive x, the argument is made greater than 10 using the recurrence psi(x+1) = psi(x) + 1/x. Then the following asymptotic expansion is applied: psi(x) = log(x) - 1/(2x) - Sum( k=1..inf, B2k/(2k*x^(2k)) ), where the B2k are Bernoulli numbers. ACCURACY: Relative error (except absolute when |psi| < 1): arithmetic domain # trials peak rms IEEE 0,30 30000 1.3e-15 1.4e-16 IEEE -30,0 40000 1.5e-15 2.2e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1992, 2000 by Stephen L. Moshier *************************************************************************/
public static double psi(double x)
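
An illustrative sketch for Psi, checking two well-known identities of the digamma function (same conventions as the other examples in this manual).

public static int Main(string[] args)
{
    //
    // Illustrative sketch: psi(1) = -EUL (Euler-Mascheroni constant) and
    // psi satisfies the recurrence psi(x+1) = psi(x) + 1/x.
    //
    double v1 = alglib.psi(1.0);
    double v2 = alglib.psi(2.0);

    System.Console.WriteLine("{0:F6}", v1);      // EXPECTED: -0.577216
    System.Console.WriteLine("{0:F6}", v2 - v1); // EXPECTED: 1.000000
    System.Console.ReadLine();
    return 0;
}
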
pspline2interpolant
pspline3interpolant
pspline2arclength
pspline2build
pspline2buildperiodic
pspline2calc
pspline2diff
pspline2diff2
pspline2parametervalues
pspline2tangent
pspline3arclength
pspline3build
pspline3buildperiodic
pspline3calc
pspline3diff
pspline3diff2
pspline3parametervalues
pspline3tangent
/************************************************************************* Parametric spline interpolant: 2-dimensional curve. You should not try to access its members directly - use PSpline2XXXXXXXX() functions instead. *************************************************************************/
public class pspline2interpolant { }
/************************************************************************* Parametric spline interpolant: 3-dimensional curve. You should not try to access its members directly - use PSpline3XXXXXXXX() functions instead. *************************************************************************/
public class pspline3interpolant { }
/************************************************************************* This function calculates arc length, i.e. length of curve between t=a and t=b. INPUT PARAMETERS: P - parametric spline interpolant A,B - parameter values corresponding to arc ends: * B>A will result in positive length returned * B<A will result in negative length returned RESULT: length of arc starting at T=A and ending at T=B. -- ALGLIB PROJECT -- Copyright 30.05.2010 by Bochkanov Sergey *************************************************************************/
public static double pspline2arclength( pspline2interpolant p, double a, double b)
/************************************************************************* This function builds non-periodic 2-dimensional parametric spline which starts at (X[0],Y[0]) and ends at (X[N-1],Y[N-1]). INPUT PARAMETERS: XY - points, array[0..N-1,0..1]. XY[I,0:1] corresponds to the Ith point. Order of points is important! N - points count, N>=5 for Akima splines, N>=2 for other types of splines. ST - spline type: * 0 Akima spline * 1 parabolically terminated Catmull-Rom spline (Tension=0) * 2 parabolically terminated cubic spline PT - parameterization type: * 0 uniform * 1 chord length * 2 centripetal OUTPUT PARAMETERS: P - parametric spline interpolant NOTES: * this function assumes that there all consequent points are distinct. I.e. (x0,y0)<>(x1,y1), (x1,y1)<>(x2,y2), (x2,y2)<>(x3,y3) and so on. However, non-consequent points may coincide, i.e. we can have (x0,y0)= =(x2,y2). -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline2build( double[,] xy, int n, int st, int pt, out pspline2interpolant p)
/************************************************************************* This function builds periodic 2-dimensional parametric spline which starts at (X[0],Y[0]), goes through all points to (X[N-1],Y[N-1]) and then back to (X[0],Y[0]). INPUT PARAMETERS: XY - points, array[0..N-1,0..1]. XY[I,0:1] corresponds to the Ith point. XY[N-1,0:1] must be different from XY[0,0:1]. Order of points is important! N - points count, N>=3 for other types of splines. ST - spline type: * 1 Catmull-Rom spline (Tension=0) with cyclic boundary conditions * 2 cubic spline with cyclic boundary conditions PT - parameterization type: * 0 uniform * 1 chord length * 2 centripetal OUTPUT PARAMETERS: P - parametric spline interpolant NOTES: * this function assumes that there all consequent points are distinct. I.e. (x0,y0)<>(x1,y1), (x1,y1)<>(x2,y2), (x2,y2)<>(x3,y3) and so on. However, non-consequent points may coincide, i.e. we can have (x0,y0)= =(x2,y2). * last point of sequence is NOT equal to the first point. You shouldn't make curve "explicitly periodic" by making them equal. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline2buildperiodic( double[,] xy, int n, int st, int pt, out pspline2interpolant p)
/************************************************************************* This function calculates the value of the parametric spline for a given value of parameter T INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-position Y - Y-position -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline2calc( pspline2interpolant p, double t, out double x, out double y)
/************************************************************************* This function calculates derivative, i.e. it returns (dX/dT,dY/dT). INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-value DX - X-derivative Y - Y-value DY - Y-derivative -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline2diff( pspline2interpolant p, double t, out double x, out double dx, out double y, out double dy)
/************************************************************************* This function calculates first and second derivative with respect to T. INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-value DX - derivative D2X - second derivative Y - Y-value DY - derivative D2Y - second derivative -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline2diff2( pspline2interpolant p, double t, out double x, out double dx, out double d2x, out double y, out double dy, out double d2y)
/************************************************************************* This function returns the vector of parameter values corresponding to points. I.e. for P created from (X[0],Y[0])...(X[N-1],Y[N-1]) and U=TValues(P) we have (X[0],Y[0]) = PSpline2Calc(P,U[0]), (X[1],Y[1]) = PSpline2Calc(P,U[1]), (X[2],Y[2]) = PSpline2Calc(P,U[2]), ... INPUT PARAMETERS: P - parametric spline interpolant OUTPUT PARAMETERS: N - array size T - array[0..N-1] NOTES: * for non-periodic splines U[0]=0, U[0]<U[1]<...<U[N-1], U[N-1]=1 * for periodic splines U[0]=0, U[0]<U[1]<...<U[N-1], U[N-1]<1 -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline2parametervalues( pspline2interpolant p, out int n, out double[] t)
/************************************************************************* This function calculates tangent vector for a given value of parameter T INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-component of tangent vector (normalized) Y - Y-component of tangent vector (normalized) NOTE: X^2+Y^2 is either 1 (for non-zero tangent vector) or 0. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline2tangent( pspline2interpolant p, double t, out double x, out double y)
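
An illustrative sketch for PSpline2Build/PSpline2Calc; the point set and the spline/parameterization types are hypothetical (same conventions as the other examples in this manual).

public static int Main(string[] args)
{
    //
    // Illustrative sketch: 2D parametric Catmull-Rom spline (ST=1) with
    // chord-length parameterization (PT=1) through four points; T=0 maps
    // to the first point and T=1 to the last one.
    //
    double[,] xy = new double[,]{{0,0},{1,1},{2,0},{3,1}};
    alglib.pspline2interpolant p;
    double x;
    double y;

    alglib.pspline2build(xy, 4, 1, 1, out p);

    alglib.pspline2calc(p, 0.0, out x, out y);
    System.Console.WriteLine("{0:F4} {1:F4}", x, y); // EXPECTED: 0.0000 0.0000
    alglib.pspline2calc(p, 1.0, out x, out y);
    System.Console.WriteLine("{0:F4} {1:F4}", x, y); // EXPECTED: 3.0000 1.0000
    System.Console.ReadLine();
    return 0;
}
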
/************************************************************************* This function calculates arc length, i.e. length of curve between t=a and t=b. INPUT PARAMETERS: P - parametric spline interpolant A,B - parameter values corresponding to arc ends: * B>A will result in positive length returned * B<A will result in negative length returned RESULT: length of arc starting at T=A and ending at T=B. -- ALGLIB PROJECT -- Copyright 30.05.2010 by Bochkanov Sergey *************************************************************************/
public static double pspline3arclength( pspline3interpolant p, double a, double b)
/************************************************************************* This function builds non-periodic 3-dimensional parametric spline which starts at (X[0],Y[0],Z[0]) and ends at (X[N-1],Y[N-1],Z[N-1]). Same as PSpline2Build() function, but for 3D, so we won't duplicate its description here. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline3build( double[,] xy, int n, int st, int pt, out pspline3interpolant p)
/************************************************************************* This function builds periodic 3-dimensional parametric spline which starts at (X[0],Y[0],Z[0]), goes through all points to (X[N-1],Y[N-1],Z[N-1]) and then back to (X[0],Y[0],Z[0]). Same as PSpline2Build() function, but for 3D, so we won't duplicate its description here. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline3buildperiodic( double[,] xy, int n, int st, int pt, out pspline3interpolant p)
/************************************************************************* This function calculates the value of the parametric spline for a given value of parameter T. INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-position Y - Y-position Z - Z-position -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline3calc( pspline3interpolant p, double t, out double x, out double y, out double z)
/************************************************************************* This function calculates derivative, i.e. it returns (dX/dT,dY/dT,dZ/dT). INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-value DX - X-derivative Y - Y-value DY - Y-derivative Z - Z-value DZ - Z-derivative -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline3diff( pspline3interpolant p, double t, out double x, out double dx, out double y, out double dy, out double z, out double dz)
/************************************************************************* This function calculates first and second derivative with respect to T. INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-value DX - derivative D2X - second derivative Y - Y-value DY - derivative D2Y - second derivative Z - Z-value DZ - derivative D2Z - second derivative -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline3diff2( pspline3interpolant p, double t, out double x, out double dx, out double d2x, out double y, out double dy, out double d2y, out double z, out double dz, out double d2z)
/************************************************************************* This function returns the vector of parameter values corresponding to points. Same as PSpline2ParameterValues(), but for 3D. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline3parametervalues( pspline3interpolant p, out int n, out double[] t)
/************************************************************************* This function calculates tangent vector for a given value of parameter T INPUT PARAMETERS: P - parametric spline interpolant T - point: * T in [0,1] corresponds to interval spanned by points * for non-periodic splines T<0 (or T>1) correspond to parts of the curve before the first (after the last) point * for periodic splines T<0 (or T>1) are projected into [0,1] by making T=T-floor(T). OUTPUT PARAMETERS: X - X-component of tangent vector (normalized) Y - Y-component of tangent vector (normalized) Z - Z-component of tangent vector (normalized) NOTE: X^2+Y^2+Z^2 is either 1 (for non-zero tangent vector) or 0. -- ALGLIB PROJECT -- Copyright 28.05.2010 by Bochkanov Sergey *************************************************************************/
public static void pspline3tangent( pspline3interpolant p, double t, out double x, out double y, out double z)
barycentricinterpolant
barycentricbuildfloaterhormann
barycentricbuildxyw
barycentriccalc
barycentricdiff1
barycentricdiff2
barycentriclintransx
barycentriclintransy
barycentricunpack
/************************************************************************* Barycentric interpolant. *************************************************************************/
public class barycentricinterpolant { }
/************************************************************************* Rational interpolant without poles The subroutine constructs the rational interpolating function without real poles (see 'Barycentric rational interpolation with no poles and high rates of approximation', Michael S. Floater and Kai Hormann, for more information on this subject). Input parameters: X - interpolation nodes, array[0..N-1]. Y - function values, array[0..N-1]. N - number of nodes, N>0. D - order of the interpolation scheme, 0 <= D <= N-1. D<0 will cause an error. D>=N will be replaced with D=N-1. If you don't know what D to choose, use a small value, about 3-5. Output parameters: B - barycentric interpolant. Note: this algorithm always succeeds and calculates the weights with close to machine precision. -- ALGLIB PROJECT -- Copyright 17.06.2007 by Bochkanov Sergey *************************************************************************/
public static void barycentricbuildfloaterhormann( double[] x, double[] y, int n, int d, out barycentricinterpolant b)
/************************************************************************* Rational interpolant from X/Y/W arrays F(t) = SUM(i=0,n-1,w[i]*f[i]/(t-x[i])) / SUM(i=0,n-1,w[i]/(t-x[i])) INPUT PARAMETERS: X - interpolation nodes, array[0..N-1] F - function values, array[0..N-1] W - barycentric weights, array[0..N-1] N - nodes count, N>0 OUTPUT PARAMETERS: B - barycentric interpolant built from (X, Y, W) -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void barycentricbuildxyw( double[] x, double[] y, double[] w, int n, out barycentricinterpolant b)
/************************************************************************* Rational interpolation using barycentric formula F(t) = SUM(i=0,n-1,w[i]*f[i]/(t-x[i])) / SUM(i=0,n-1,w[i]/(t-x[i])) Input parameters: B - barycentric interpolant built with one of model building subroutines. T - interpolation point Result: barycentric interpolant F(t) -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static double barycentriccalc(barycentricinterpolant b, double t)
/************************************************************************* Differentiation of barycentric interpolant: first derivative. Algorithm used in this subroutine is very robust and should not fail until provided with values too close to MaxRealNumber (usually MaxRealNumber/N or greater will overflow). INPUT PARAMETERS: B - barycentric interpolant built with one of model building subroutines. T - interpolation point OUTPUT PARAMETERS: F - barycentric interpolant at T DF - first derivative NOTE -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void barycentricdiff1( barycentricinterpolant b, double t, out double f, out double df)
/************************************************************************* Differentiation of barycentric interpolant: first/second derivatives. INPUT PARAMETERS: B - barycentric interpolant built with one of model building subroutines. T - interpolation point OUTPUT PARAMETERS: F - barycentric interpolant at T DF - first derivative D2F - second derivative NOTE: this algorithm may fail due to overflow/underflow if used on data whose values are close to MaxRealNumber or MinRealNumber. Use the more robust BarycentricDiff1() subroutine in such cases. -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void barycentricdiff2( barycentricinterpolant b, double t, out double f, out double df, out double d2f)
/************************************************************************* This subroutine performs linear transformation of the argument. INPUT PARAMETERS: B - rational interpolant in barycentric form CA, CB - transformation coefficients: x = CA*t + CB OUTPUT PARAMETERS: B - transformed interpolant with X replaced by T -- ALGLIB PROJECT -- Copyright 19.08.2009 by Bochkanov Sergey *************************************************************************/
public static void barycentriclintransx( barycentricinterpolant b, double ca, double cb)
/************************************************************************* This subroutine performs linear transformation of the barycentric interpolant. INPUT PARAMETERS: B - rational interpolant in barycentric form CA, CB - transformation coefficients: B2(x) = CA*B(x) + CB OUTPUT PARAMETERS: B - transformed interpolant -- ALGLIB PROJECT -- Copyright 19.08.2009 by Bochkanov Sergey *************************************************************************/
public static void barycentriclintransy( barycentricinterpolant b, double ca, double cb)
/************************************************************************* Extracts X/Y/W arrays from rational interpolant INPUT PARAMETERS: B - barycentric interpolant OUTPUT PARAMETERS: N - nodes count, N>0 X - interpolation nodes, array[0..N-1] F - function values, array[0..N-1] W - barycentric weights, array[0..N-1] -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
public static void barycentricunpack( barycentricinterpolant b, out int n, out double[] x, out double[] y, out double[] w)
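
The functions above are typically combined as in the following sketch (not one of the official ALGLIB examples). It assumes that the barycentric model is produced by polynomialbuild() from the polint subpackage, documented elsewhere in this manual; any routine that returns a barycentricinterpolant could be used instead.

public static int Main(string[] args)
{
    //
    // Hypothetical sketch: build a barycentric model of f(x)=x^2 through
    // three nodes (here via polynomialbuild(), see the polint subpackage),
    // then evaluate it, differentiate it and unpack its X/Y/W arrays.
    //
    double[] x = new double[]{-1.0, 0.0, +1.0};
    double[] y = new double[]{+1.0, 0.0, +1.0};
    alglib.barycentricinterpolant b;
    alglib.polynomialbuild(x, y, out b);              // assumed model-building routine

    double v = alglib.barycentriccalc(b, 0.5);        // F(0.5), close to 0.25
    double f, df;
    alglib.barycentricdiff1(b, 0.5, out f, out df);   // f close to 0.25, df close to 1.0

    int n;
    double[] bx, by, bw;
    alglib.barycentricunpack(b, out n, out bx, out by, out bw);
    System.Console.WriteLine("{0:F4} {1:F4}", v, df);
    return 0;
}
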
cmatrixlurcond1
cmatrixlurcondinf
cmatrixrcond1
cmatrixrcondinf
cmatrixtrrcond1
cmatrixtrrcondinf
hpdmatrixcholeskyrcond
hpdmatrixrcond
rmatrixlurcond1
rmatrixlurcondinf
rmatrixrcond1
rmatrixrcondinf
rmatrixtrrcond1
rmatrixtrrcondinf
spdmatrixcholeskyrcond
spdmatrixrcond
/************************************************************************* Estimate of the condition number of a matrix given by its LU decomposition (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: LUA - LU decomposition of a matrix in compact form. Output of the CMatrixLU subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double cmatrixlurcond1(complex[,] lua, int n)
/************************************************************************* Estimate of the condition number of a matrix given by its LU decomposition (infinity norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: LUA - LU decomposition of a matrix in compact form. Output of the CMatrixLU subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double cmatrixlurcondinf(complex[,] lua, int n)
/************************************************************************* Estimate of a matrix condition number (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double cmatrixrcond1(complex[,] a, int n)
/************************************************************************* Estimate of a matrix condition number (infinity-norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double cmatrixrcondinf(complex[,] a, int n)
/************************************************************************* Triangular matrix: estimate of a condition number (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array[0..N-1, 0..N-1]. N - size of A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double cmatrixtrrcond1( complex[,] a, int n, bool isupper, bool isunit)
/************************************************************************* Triangular matrix: estimate of a matrix condition number (infinity-norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double cmatrixtrrcondinf( complex[,] a, int n, bool isupper, bool isunit)
/************************************************************************* Condition number estimate of a Hermitian positive definite matrix given by Cholesky decomposition. The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). It should be noted that 1-norm and inf-norm condition numbers of symmetric matrices are equal, so the algorithm doesn't take into account the differences between these types of norms. Input parameters: CD - Cholesky decomposition of matrix A, output of SMatrixCholesky subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double hpdmatrixcholeskyrcond( complex[,] a, int n, bool isupper)
/************************************************************************* Condition number estimate of a Hermitian positive definite matrix. The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). It should be noted that 1-norm and inf-norm of condition numbers of symmetric matrices are equal, so the algorithm doesn't take into account the differences between these types of norms. Input parameters: A - Hermitian positive definite matrix which is given by its upper or lower triangle depending on the value of IsUpper. Array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. Result: 1/LowerBound(cond(A)), if matrix A is positive definite, -1, if matrix A is not positive definite, and its condition number could not be found by this algorithm. NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double hpdmatrixrcond(complex[,] a, int n, bool isupper)
/************************************************************************* Estimate of the condition number of a matrix given by its LU decomposition (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: LUA - LU decomposition of a matrix in compact form. Output of the RMatrixLU subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double rmatrixlurcond1(double[,] lua, int n)
/************************************************************************* Estimate of the condition number of a matrix given by its LU decomposition (infinity norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: LUA - LU decomposition of a matrix in compact form. Output of the RMatrixLU subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double rmatrixlurcondinf(double[,] lua, int n)
/************************************************************************* Estimate of a matrix condition number (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double rmatrixrcond1(double[,] a, int n)
/************************************************************************* Estimate of a matrix condition number (infinity-norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double rmatrixrcondinf(double[,] a, int n)
/************************************************************************* Triangular matrix: estimate of a condition number (1-norm) The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array[0..N-1, 0..N-1]. N - size of A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double rmatrixtrrcond1( double[,] a, int n, bool isupper, bool isunit)
/************************************************************************* Triangular matrix: estimate of a matrix condition number (infinity-norm). The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double rmatrixtrrcondinf( double[,] a, int n, bool isupper, bool isunit)
/************************************************************************* Condition number estimate of a symmetric positive definite matrix given by Cholesky decomposition. The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). It should be noted that 1-norm and inf-norm condition numbers of symmetric matrices are equal, so the algorithm doesn't take into account the differences between these types of norms. Input parameters: CD - Cholesky decomposition of matrix A, output of SMatrixCholesky subroutine. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double spdmatrixcholeskyrcond( double[,] a, int n, bool isupper)
/************************************************************************* Condition number estimate of a symmetric positive definite matrix. The algorithm calculates a lower bound of the condition number. In this case, the algorithm does not return a lower bound of the condition number, but an inverse number (to avoid an overflow in case of a singular matrix). It should be noted that 1-norm and inf-norm of condition numbers of symmetric matrices are equal, so the algorithm doesn't take into account the differences between these types of norms. Input parameters: A - symmetric positive definite matrix which is given by its upper or lower triangle depending on the value of IsUpper. Array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. Result: 1/LowerBound(cond(A)), if matrix A is positive definite, -1, if matrix A is not positive definite, and its condition number could not be found by this algorithm. NOTE: if k(A) is very large, then matrix is assumed degenerate, k(A)=INF, 0.0 is returned in such cases. *************************************************************************/
public static double spdmatrixrcond(double[,] a, int n, bool isupper)
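
All estimators in this subpackage follow the same calling pattern and return the reciprocal of the condition number estimate. The sketch below (not an official ALGLIB example) shows typical usage with rmatrixrcond1()/rmatrixrcondinf(); the near-singularity threshold is chosen for illustration only.

public static int Main(string[] args)
{
    //
    // Estimate the condition number of a small well-conditioned matrix.
    // The functions return 1/LowerBound(cond(A)), so values near zero
    // indicate a (nearly) singular matrix.
    //
    double[,] a = new double[,]{{2.0, 1.0},{1.0, 3.0}};
    double rc1   = alglib.rmatrixrcond1(a, 2);    // reciprocal 1-norm estimate
    double rcinf = alglib.rmatrixrcondinf(a, 2);  // reciprocal inf-norm estimate
    if( rc1<1.0E-12 )                             // illustrative threshold, not an ALGLIB constant
        System.Console.WriteLine("matrix is nearly singular");
    else
        System.Console.WriteLine("{0:E3} {1:E3}", rc1, rcinf);
    return 0;
}
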
rmatrixschur
/************************************************************************* Subroutine performing the Schur decomposition of a general matrix by using the QR algorithm with multiple shifts. The source matrix A is represented as S'*A*S = T, where S is an orthogonal matrix (Schur vectors), T - upper quasi-triangular matrix (with blocks of sizes 1x1 and 2x2 on the main diagonal). Input parameters: A - matrix to be decomposed. Array whose indexes range within [0..N-1, 0..N-1]. N - size of A, N>=0. Output parameters: A - contains matrix T. Array whose indexes range within [0..N-1, 0..N-1]. S - contains Schur vectors. Array whose indexes range within [0..N-1, 0..N-1]. Note 1: The block structure of matrix T can be easily recognized: since all the elements below the blocks are zeros, the elements a[i+1,i] which are equal to 0 show the block border. Note 2: The algorithm performance depends on the value of the internal parameter NS of the InternalSchurDecomposition subroutine which defines the number of shifts in the QR algorithm (similarly to the block width in block-matrix algorithms in linear algebra). If you require maximum performance on your machine, it is recommended to adjust this parameter manually. Result: True, if the algorithm has converged and parameters A and S contain the result. False, if the algorithm has not converged. Algorithm implemented on the basis of the DHSEQR subroutine (LAPACK 3.0 library). *************************************************************************/
public static bool rmatrixschur(ref double[,] a, int n, out double[,] s)
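
A minimal usage sketch (not an official ALGLIB example): on success the input matrix is overwritten by the quasi-triangular factor T and S receives the Schur vectors.

public static int Main(string[] args)
{
    //
    // Schur decomposition of a 2x2 matrix with real distinct eigenvalues,
    // so T is truly upper triangular and its diagonal holds the eigenvalues.
    //
    double[,] a = new double[,]{{4.0, 1.0},{2.0, 3.0}};
    double[,] s;
    if( !alglib.rmatrixschur(ref a, 2, out s) )
    {
        System.Console.WriteLine("QR algorithm did not converge");
        return 1;
    }
    System.Console.WriteLine("{0:F3} {1:F3}", a[0,0], a[1,1]); // eigenvalues 5 and 2 (in some order)
    return 0;
}
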
smatrixgevd
smatrixgevdreduce
/************************************************************************* Algorithm for solving the following generalized symmetric positive-definite eigenproblem: A*x = lambda*B*x (1) or A*B*x = lambda*x (2) or B*A*x = lambda*x (3). where A is a symmetric matrix, B - symmetric positive-definite matrix. The problem is solved by reducing it to an ordinary symmetric eigenvalue problem. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrices A and B. IsUpperA - storage format of matrix A. B - symmetric positive-definite matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. IsUpperB - storage format of matrix B. ZNeeded - if ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. ProblemType - if ProblemType is equal to: * 1, the following problem is solved: A*x = lambda*B*x; * 2, the following problem is solved: A*B*x = lambda*x; * 3, the following problem is solved: B*A*x = lambda*x. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn’t changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..N-1]. The eigenvectors are stored in matrix columns. It should be noted that the eigenvectors in such problems do not form an orthogonal system. Result: True, if the problem was solved successfully. False, if the error occurred during the Cholesky decomposition of matrix B (the matrix isn’t positive-definite) or during the work of the iterative algorithm for solving the symmetric eigenproblem. See also the GeneralizedSymmetricDefiniteEVDReduce subroutine. -- ALGLIB -- Copyright 1.28.2006 by Bochkanov Sergey *************************************************************************/
public static bool smatrixgevd( double[,] a, int n, bool isuppera, double[,] b, bool isupperb, int zneeded, int problemtype, out double[] d, out double[,] z)
/************************************************************************* Algorithm for reduction of the following generalized symmetric positive- definite eigenvalue problem: A*x = lambda*B*x (1) or A*B*x = lambda*x (2) or B*A*x = lambda*x (3) to the symmetric eigenvalue problem C*y = lambda*y (eigenvalues of this and the given problems are the same, and the eigenvectors of the given problem could be obtained by multiplying the obtained eigenvectors by the transformation matrix x = R*y). Here A is a symmetric matrix, B - symmetric positive-definite matrix. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrices A and B. IsUpperA - storage format of matrix A. B - symmetric positive-definite matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. IsUpperB - storage format of matrix B. ProblemType - if ProblemType is equal to: * 1, the following problem is solved: A*x = lambda*B*x; * 2, the following problem is solved: A*B*x = lambda*x; * 3, the following problem is solved: B*A*x = lambda*x. Output parameters: A - symmetric matrix which is given by its upper or lower triangle depending on IsUpperA. Contains matrix C. Array whose indexes range within [0..N-1, 0..N-1]. R - upper triangular or lower triangular transformation matrix which is used to obtain the eigenvectors of a given problem as the product of eigenvectors of C (from the right) and matrix R (from the left). If the matrix is upper triangular, the elements below the main diagonal are equal to 0 (and vice versa). Thus, we can perform the multiplication without taking into account the internal structure (which is an easier though less effective way). Array whose indexes range within [0..N-1, 0..N-1]. IsUpperR - type of matrix R (upper or lower triangular). Result: True, if the problem was reduced successfully. False, if the error occurred during the Cholesky decomposition of matrix B (the matrix is not positive-definite). -- ALGLIB -- Copyright 1.28.2006 by Bochkanov Sergey *************************************************************************/
public static bool smatrixgevdreduce( ref double[,] a, int n, bool isuppera, double[,] b, bool isupperb, int problemtype, out double[,] r, out bool isupperr)
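
The following sketch (not an official ALGLIB example) solves the generalized problem A*x = lambda*B*x (ProblemType=1) for small matrices given by their upper triangles, with eigenvectors requested (ZNeeded=1).

public static int Main(string[] args)
{
    //
    // Generalized symmetric eigenproblem A*x = lambda*B*x for 2x2 matrices.
    // A is symmetric, B is symmetric positive definite; both are passed by
    // their upper triangles (IsUpperA = IsUpperB = true).
    //
    double[,] a = new double[,]{{2.0, 1.0},{1.0, 2.0}};
    double[,] b = new double[,]{{1.0, 0.0},{0.0, 2.0}};
    double[] d;
    double[,] z;
    if( alglib.smatrixgevd(a, 2, true, b, true, 1, 1, out d, out z) )
        System.Console.WriteLine("{0}", alglib.ap.format(d, 3)); // eigenvalues in ascending order
    else
        System.Console.WriteLine("B is not positive definite or iteration failed");
    return 0;
}
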
spline1dinterpolant
spline1dbuildakima
spline1dbuildcatmullrom
spline1dbuildcubic
spline1dbuildhermite
spline1dbuildlinear
spline1dcalc
spline1dconvcubic
spline1dconvdiff2cubic
spline1dconvdiffcubic
spline1ddiff
spline1dgriddiff2cubic
spline1dgriddiffcubic
spline1dintegrate
spline1dlintransx
spline1dlintransy
spline1dunpack
spline1d_d_convdiff Resampling using cubic splines
spline1d_d_cubic Cubic spline interpolation
spline1d_d_griddiff Differentiation on the grid using cubic splines
spline1d_d_linear Piecewise linear spline interpolation
/************************************************************************* 1-dimensional spline interpolant *************************************************************************/
public class spline1dinterpolant { }
/************************************************************************* This subroutine builds Akima spline interpolant INPUT PARAMETERS: X - spline nodes, array[0..N-1] Y - function values, array[0..N-1] N - points count (optional): * N>=5 * if given, only first N points are used to build spline * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. -- ALGLIB PROJECT -- Copyright 24.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline1dbuildakima( double[] x, double[] y, out spline1dinterpolant c) public static void spline1dbuildakima( double[] x, double[] y, int n, out spline1dinterpolant c)
/************************************************************************* This subroutine builds Catmull-Rom spline interpolant. INPUT PARAMETERS: X - spline nodes, array[0..N-1]. Y - function values, array[0..N-1]. OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points are used to build spline * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundType - boundary condition type: * -1 for periodic boundary condition * 0 for parabolically terminated spline (default) Tension - tension parameter: * tension=0 corresponds to classic Catmull-Rom spline (default) * 0<tension<1 corresponds to more general form - cardinal spline OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline1dbuildcatmullrom( double[] x, double[] y, out spline1dinterpolant c) public static void spline1dbuildcatmullrom( double[] x, double[] y, int n, int boundtype, double tension, out spline1dinterpolant c)
/************************************************************************* This subroutine builds cubic spline interpolant. INPUT PARAMETERS: X - spline nodes, array[0..N-1]. Y - function values, array[0..N-1]. OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points are used to build spline * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). * 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline1dbuildcubic( double[] x, double[] y, out spline1dinterpolant c) public static void spline1dbuildcubic( double[] x, double[] y, int n, int boundltype, double boundl, int boundrtype, double boundr, out spline1dinterpolant c)
/************************************************************************* This subroutine builds Hermite spline interpolant. INPUT PARAMETERS: X - spline nodes, array[0..N-1] Y - function values, array[0..N-1] D - derivatives, array[0..N-1] N - points count (optional): * N>=2 * if given, only first N points are used to build spline * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) OUTPUT PARAMETERS: C - spline interpolant. ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline1dbuildhermite( double[] x, double[] y, double[] d, out spline1dinterpolant c) public static void spline1dbuildhermite( double[] x, double[] y, double[] d, int n, out spline1dinterpolant c)
/************************************************************************* This subroutine builds linear spline interpolant INPUT PARAMETERS: X - spline nodes, array[0..N-1] Y - function values, array[0..N-1] N - points count (optional): * N>=2 * if given, only first N points are used to build spline * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. -- ALGLIB PROJECT -- Copyright 24.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline1dbuildlinear( double[] x, double[] y, out spline1dinterpolant c) public static void spline1dbuildlinear( double[] x, double[] y, int n, out spline1dinterpolant c)

Examples:   [1]  [2]  

/************************************************************************* This subroutine calculates the value of the spline at the given point X. INPUT PARAMETERS: C - spline interpolant X - point Result: S(x) -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/
public static double spline1dcalc(spline1dinterpolant c, double x)

Examples:   [1]  [2]  

/************************************************************************* This function solves following problem: given table y[] of function values at old nodes x[] and new nodes x2[], it calculates and returns table of function values y2[] (calculated at x2[]). This function yields same result as Spline1DBuildCubic() call followed by sequence of Spline1DDiff() calls, but it can be several times faster when called for ordered X[] and X2[]. INPUT PARAMETERS: X - old spline nodes Y - function values X2 - new spline nodes OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points from X/Y are used * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) N2 - new points count: * N2>=2 * if given, only first N2 points from X2 are used * if not given, automatically detected from X2 size OUTPUT PARAMETERS: F2 - function values at X2[] ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. Function values are correctly reordered on return, so F2[I] is always equal to S(X2[I]) independently of points order. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). * 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 03.09.2010 by Bochkanov Sergey *************************************************************************/
public static void spline1dconvcubic( double[] x, double[] y, double[] x2, out double[] y2) public static void spline1dconvcubic( double[] x, double[] y, int n, int boundltype, double boundl, int boundrtype, double boundr, double[] x2, int n2, out double[] y2)

Examples:   [1]  

/************************************************************************* This function solves following problem: given table y[] of function values at old nodes x[] and new nodes x2[], it calculates and returns table of function values y2[], first and second derivatives d2[] and dd2[] (calculated at x2[]). This function yields same result as Spline1DBuildCubic() call followed by sequence of Spline1DDiff() calls, but it can be several times faster when called for ordered X[] and X2[]. INPUT PARAMETERS: X - old spline nodes Y - function values X2 - new spline nodes OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points from X/Y are used * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) N2 - new points count: * N2>=2 * if given, only first N2 points from X2 are used * if not given, automatically detected from X2 size OUTPUT PARAMETERS: F2 - function values at X2[] D2 - first derivatives at X2[] DD2 - second derivatives at X2[] ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. Function values are correctly reordered on return, so F2[I] is always equal to S(X2[I]) independently of points order. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). * 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 03.09.2010 by Bochkanov Sergey *************************************************************************/
public static void spline1dconvdiff2cubic( double[] x, double[] y, double[] x2, out double[] y2, out double[] d2, out double[] dd2) public static void spline1dconvdiff2cubic( double[] x, double[] y, int n, int boundltype, double boundl, int boundrtype, double boundr, double[] x2, int n2, out double[] y2, out double[] d2, out double[] dd2)

Examples:   [1]  

/************************************************************************* This function solves following problem: given table y[] of function values at old nodes x[] and new nodes x2[], it calculates and returns table of function values y2[] and derivatives d2[] (calculated at x2[]). This function yields same result as Spline1DBuildCubic() call followed by sequence of Spline1DDiff() calls, but it can be several times faster when called for ordered X[] and X2[]. INPUT PARAMETERS: X - old spline nodes Y - function values X2 - new spline nodes OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points from X/Y are used * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) N2 - new points count: * N2>=2 * if given, only first N2 points from X2 are used * if not given, automatically detected from X2 size OUTPUT PARAMETERS: F2 - function values at X2[] D2 - first derivatives at X2[] ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. Function values are correctly reordered on return, so F2[I] is always equal to S(X2[I]) independently of points order. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). * 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 03.09.2010 by Bochkanov Sergey *************************************************************************/
public static void spline1dconvdiffcubic( double[] x, double[] y, double[] x2, out double[] y2, out double[] d2) public static void spline1dconvdiffcubic( double[] x, double[] y, int n, int boundltype, double boundl, int boundrtype, double boundr, double[] x2, int n2, out double[] y2, out double[] d2)

Examples:   [1]  

/************************************************************************* This subroutine differentiates the spline. INPUT PARAMETERS: C - spline interpolant. X - point Result: S - S(x) DS - S'(x) D2S - S''(x) -- ALGLIB PROJECT -- Copyright 24.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline1ddiff( spline1dinterpolant c, double x, out double s, out double ds, out double d2s)
/************************************************************************* This function solves following problem: given table y[] of function values at nodes x[], it calculates and returns tables of first and second function derivatives d1[] and d2[] (calculated at the same nodes x[]). This function yields same result as Spline1DBuildCubic() call followed by sequence of Spline1DDiff() calls, but it can be several times faster when called for ordered X[] and X2[]. INPUT PARAMETERS: X - spline nodes Y - function values OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points are used * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) OUTPUT PARAMETERS: D1 - S' values at X[] D2 - S'' values at X[] ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. Derivative values are correctly reordered on return, so D[I] is always equal to S'(X[I]) independently of points order. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). * 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 03.09.2010 by Bochkanov Sergey *************************************************************************/
public static void spline1dgriddiff2cubic( double[] x, double[] y, out double[] d1, out double[] d2) public static void spline1dgriddiff2cubic( double[] x, double[] y, int n, int boundltype, double boundl, int boundrtype, double boundr, out double[] d1, out double[] d2)

Examples:   [1]  

/************************************************************************* This function solves following problem: given table y[] of function values at nodes x[], it calculates and returns table of function derivatives d[] (calculated at the same nodes x[]). This function yields same result as Spline1DBuildCubic() call followed by sequence of Spline1DDiff() calls, but it can be several times faster when called for ordered X[] and X2[]. INPUT PARAMETERS: X - spline nodes Y - function values OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points are used * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) OUTPUT PARAMETERS: D - derivative values at X[] ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. Derivative values are correctly reordered on return, so D[I] is always equal to S'(X[I]) independently of points order. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). * 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 03.09.2010 by Bochkanov Sergey *************************************************************************/
public static void spline1dgriddiffcubic( double[] x, double[] y, out double[] d) public static void spline1dgriddiffcubic( double[] x, double[] y, int n, int boundltype, double boundl, int boundrtype, double boundr, out double[] d)

Examples:   [1]  

/************************************************************************* This subroutine integrates the spline. INPUT PARAMETERS: C - spline interpolant. X - right bound of the integration interval [a, x], here 'a' denotes min(x[]) Result: integral(S(t)dt,a,x) -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/
public static double spline1dintegrate(spline1dinterpolant c, double x)
/************************************************************************* This subroutine performs linear transformation of the spline argument. INPUT PARAMETERS: C - spline interpolant. A, B- transformation coefficients: x = A*t + B Result: C - transformed spline -- ALGLIB PROJECT -- Copyright 30.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline1dlintransx( spline1dinterpolant c, double a, double b)
/************************************************************************* This subroutine performs linear transformation of the spline. INPUT PARAMETERS: C - spline interpolant. A, B- transformation coefficients: S2(x) = A*S(x) + B Result: C - transformed spline -- ALGLIB PROJECT -- Copyright 30.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline1dlintransy( spline1dinterpolant c, double a, double b)
/************************************************************************* This subroutine unpacks the spline into the coefficients table. INPUT PARAMETERS: C - spline interpolant. Result: N - points count Tbl - coefficients table, unpacked format, array[0..N-2, 0..5]. For I = 0...N-2: Tbl[I,0] = X[i] Tbl[I,1] = X[i+1] Tbl[I,2] = C0 Tbl[I,3] = C1 Tbl[I,4] = C2 Tbl[I,5] = C3 On [x[i], x[i+1]] the spline is equal to: S(x) = C0 + C1*t + C2*t^2 + C3*t^3 where t = x-x[i] -- ALGLIB PROJECT -- Copyright 29.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline1dunpack( spline1dinterpolant c, out int n, out double[,] tbl)

public static int Main(string[] args)
{
    //
    // We use cubic spline to do resampling, i.e. having
    // values of f(x)=x^2 sampled at 5 equidistant nodes on [-1,+1]
    // we calculate values/derivatives of cubic spline on 
    // another grid (equidistant with 9 nodes on [-1,+1])
    // WITHOUT CONSTRUCTION OF SPLINE OBJECT.
    //
    // There are efficient functions spline1dconvcubic(),
    // spline1dconvdiffcubic() and spline1dconvdiff2cubic() 
    // for such calculations.
    //
    // We use default boundary conditions ("parabolically terminated
    // spline") because cubic spline built with such boundary conditions 
    // will exactly reproduce any quadratic f(x).
    //
    // Actually, we could use natural conditions, but we feel that 
    // spline which exactly reproduces f() will show us more 
    // understandable results.
    //
    double[] x_old = new double[]{-1.0,-0.5,0.0,+0.5,+1.0};
    double[] y_old = new double[]{+1.0,0.25,0.0,0.25,+1.0};
    double[] x_new = new double[]{-1.00,-0.75,-0.50,-0.25,0.00,+0.25,+0.50,+0.75,+1.00};
    double[] y_new;
    double[] d1_new;
    double[] d2_new;

    //
    // First, conversion without differentiation.
    //
    //
    alglib.spline1dconvcubic(x_old, y_old, x_new, out y_new);
    System.Console.WriteLine("{0}", alglib.ap.format(y_new,3)); // EXPECTED: [1.0000, 0.5625, 0.2500, 0.0625, 0.0000, 0.0625, 0.2500, 0.5625, 1.0000]

    //
    // Then, conversion with differentiation (first derivatives only)
    //
    //
    alglib.spline1dconvdiffcubic(x_old, y_old, x_new, out y_new, out d1_new);
    System.Console.WriteLine("{0}", alglib.ap.format(y_new,3)); // EXPECTED: [1.0000, 0.5625, 0.2500, 0.0625, 0.0000, 0.0625, 0.2500, 0.5625, 1.0000]
    System.Console.WriteLine("{0}", alglib.ap.format(d1_new,3)); // EXPECTED: [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]

    //
    // Finally, conversion with first and second derivatives
    //
    //
    alglib.spline1dconvdiff2cubic(x_old, y_old, x_new, out y_new, out d1_new, out d2_new);
    System.Console.WriteLine("{0}", alglib.ap.format(y_new,3)); // EXPECTED: [1.0000, 0.5625, 0.2500, 0.0625, 0.0000, 0.0625, 0.2500, 0.5625, 1.0000]
    System.Console.WriteLine("{0}", alglib.ap.format(d1_new,3)); // EXPECTED: [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
    System.Console.WriteLine("{0}", alglib.ap.format(d2_new,3)); // EXPECTED: [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // We use cubic spline to interpolate f(x)=x^2 sampled 
    // at 5 equidistant nodes on [-1,+1].
    //
    // First, we use default boundary conditions ("parabolically terminated
    // spline") because cubic spline built with such boundary conditions 
    // will exactly reproduce any quadratic f(x).
    //
    // Then we try to use natural boundary conditions
    //     d2S(-1)/dx^2 = 0.0
    //     d2S(+1)/dx^2 = 0.0
    // and see that such a spline interpolates f(x) with a small error.
    //
    double[] x = new double[]{-1.0,-0.5,0.0,+0.5,+1.0};
    double[] y = new double[]{+1.0,0.25,0.0,0.25,+1.0};
    double t = 0.25;
    double v;
    alglib.spline1dinterpolant s;
    int natural_bound_type = 2;
    //
    // Test exact boundary conditions: build S(x), calculate S(0.25)
    // (almost same as original function)
    //
    alglib.spline1dbuildcubic(x, y, out s);
    v = alglib.spline1dcalc(s, t);
    System.Console.WriteLine("{0:F4", v); // EXPECTED: 0.0625

    //
    // Test natural boundary conditions: build S(x), calculate S(0.25)
    // (small interpolation error)
    //
    alglib.spline1dbuildcubic(x, y, 5, natural_bound_type, 0.0, natural_bound_type, 0.0, out s);
    v = alglib.spline1dcalc(s, t);
    System.Console.WriteLine("{0:F3", v); // EXPECTED: 0.0580
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // We use cubic spline to do grid differentiation, i.e. having
    // values of f(x)=x^2 sampled at 5 equidistant nodes on [-1,+1]
    // we calculate derivatives of cubic spline at nodes WITHOUT
    // CONSTRUCTION OF SPLINE OBJECT.
    //
    // There are efficient functions spline1dgriddiffcubic() and
    // spline1dgriddiff2cubic() for such calculations.
    //
    // We use default boundary conditions ("parabolically terminated
    // spline") because cubic spline built with such boundary conditions 
    // will exactly reproduce any quadratic f(x).
    //
    // Actually, we could use natural conditions, but we feel that 
    // spline which exactly reproduces f() will show us more 
    // understandable results.
    //
    double[] x = new double[]{-1.0,-0.5,0.0,+0.5,+1.0};
    double[] y = new double[]{+1.0,0.25,0.0,0.25,+1.0};
    double[] d1;
    double[] d2;

    //
    // We calculate first derivatives: they must be equal to 2*x
    //
    alglib.spline1dgriddiffcubic(x, y, out d1);
    System.Console.WriteLine("{0}", alglib.ap.format(d1,3)); // EXPECTED: [-2.0, -1.0, 0.0, +1.0, +2.0]

    //
    // Now test griddiff2, which returns first AND second derivatives.
    // First derivative is 2*x, second is equal to 2.0
    //
    alglib.spline1dgriddiff2cubic(x, y, out d1, out d2);
    System.Console.WriteLine("{0}", alglib.ap.format(d1,3)); // EXPECTED: [-2.0, -1.0, 0.0, +1.0, +2.0]
    System.Console.WriteLine("{0}", alglib.ap.format(d2,3)); // EXPECTED: [ 2.0,  2.0, 2.0,  2.0,  2.0]
    System.Console.ReadLine();
    return 0;
}



public static int Main(string[] args)
{
    //
    // We use piecewise linear spline to interpolate f(x)=x^2 sampled 
    // at 5 equidistant nodes on [-1,+1].
    //
    double[] x = new double[]{-1.0,-0.5,0.0,+0.5,+1.0};
    double[] y = new double[]{+1.0,0.25,0.0,0.25,+1.0};
    double t = 0.25;
    double v;
    alglib.spline1dinterpolant s;

    // build spline
    alglib.spline1dbuildlinear(x, y, out s);

    // calculate S(0.25) - it is quite different from 0.25^2=0.0625
    v = alglib.spline1dcalc(s, t);
    System.Console.WriteLine("{0:F4", v); // EXPECTED: 0.125
    System.Console.ReadLine();
    return 0;
}
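
The routines spline1ddiff(), spline1dintegrate(), spline1dlintransx()/spline1dlintransy() and spline1dunpack() have no dedicated example above. The following sketch (not an official ALGLIB example) applies them to the same f(x)=x^2 data used in the previous examples.

public static int Main(string[] args)
{
    //
    // Differentiate, integrate, transform and unpack a cubic spline
    // built for f(x)=x^2 sampled at 5 equidistant nodes on [-1,+1].
    //
    double[] x = new double[]{-1.0,-0.5,0.0,+0.5,+1.0};
    double[] y = new double[]{+1.0,0.25,0.0,0.25,+1.0};
    alglib.spline1dinterpolant s;
    alglib.spline1dbuildcubic(x, y, out s);

    double v, dv, d2v;
    alglib.spline1ddiff(s, 0.25, out v, out dv, out d2v); // S(0.25), S'(0.25), S''(0.25)

    double integral = alglib.spline1dintegrate(s, +1.0);  // integral over [-1,+1], close to 2/3

    alglib.spline1dlintransy(s, 2.0, 1.0);                // replaces S(x) by 2*S(x)+1

    int n;
    double[,] tbl;
    alglib.spline1dunpack(s, out n, out tbl);             // per-interval cubic coefficients
    System.Console.WriteLine("{0:F4} {1:F4}", v, integral);
    return 0;
}
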


spline2dinterpolant
spline2dbuildbicubic
spline2dbuildbilinear
spline2dcalc
spline2ddiff
spline2dlintransf
spline2dlintransxy
spline2dresamplebicubic
spline2dresamplebilinear
spline2dunpack
/************************************************************************* 2-dimensional spline interpolant *************************************************************************/
public class spline2dinterpolant { }
/************************************************************************* This subroutine builds bicubic spline coefficients table. Input parameters: X - spline abscissas, array[0..N-1] Y - spline ordinates, array[0..M-1] F - function values, array[0..M-1,0..N-1] M,N - grid size, M>=2, N>=2 Output parameters: C - spline interpolant -- ALGLIB PROJECT -- Copyright 05.07.2007 by Bochkanov Sergey *************************************************************************/
public static void spline2dbuildbicubic( double[] x, double[] y, double[,] f, int m, int n, out spline2dinterpolant c)
/************************************************************************* This subroutine builds bilinear spline coefficients table. Input parameters: X - spline abscissas, array[0..N-1] Y - spline ordinates, array[0..M-1] F - function values, array[0..M-1,0..N-1] M,N - grid size, M>=2, N>=2 Output parameters: C - spline interpolant -- ALGLIB PROJECT -- Copyright 05.07.2007 by Bochkanov Sergey *************************************************************************/
public static void spline2dbuildbilinear( double[] x, double[] y, double[,] f, int m, int n, out spline2dinterpolant c)
/************************************************************************* This subroutine calculates the value of the bilinear or bicubic spline at the given point X. Input parameters: C - coefficients table. Built by BuildBilinearSpline or BuildBicubicSpline. X, Y- point Result: S(x,y) -- ALGLIB PROJECT -- Copyright 05.07.2007 by Bochkanov Sergey *************************************************************************/
public static double spline2dcalc( spline2dinterpolant c, double x, double y)
/************************************************************************* This subroutine calculates the value of the bilinear or bicubic spline at the given point X and its derivatives. Input parameters: C - spline interpolant. X, Y- point Output parameters: F - S(x,y) FX - dS(x,y)/dX FY - dS(x,y)/dY FXY - d2S(x,y)/dXdY -- ALGLIB PROJECT -- Copyright 05.07.2007 by Bochkanov Sergey *************************************************************************/
public static void spline2ddiff( spline2dinterpolant c, double x, double y, out double f, out double fx, out double fy, out double fxy)
/************************************************************************* This subroutine performs linear transformation of the spline. Input parameters: C - spline interpolant. A, B- transformation coefficients: S2(x,y) = A*S(x,y) + B Output parameters: C - transformed spline -- ALGLIB PROJECT -- Copyright 30.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline2dlintransf( spline2dinterpolant c, double a, double b)
/************************************************************************* This subroutine performs linear transformation of the spline argument. Input parameters: C - spline interpolant AX, BX - transformation coefficients: x = AX*t + BX AY, BY - transformation coefficients: y = AY*u + BY Result: C - transformed spline -- ALGLIB PROJECT -- Copyright 30.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline2dlintransxy( spline2dinterpolant c, double ax, double bx, double ay, double by)
/************************************************************************* Bicubic spline resampling Input parameters: A - function values at the old grid, array[0..OldHeight-1, 0..OldWidth-1] OldHeight - old grid height, OldHeight>1 OldWidth - old grid width, OldWidth>1 NewHeight - new grid height, NewHeight>1 NewWidth - new grid width, NewWidth>1 Output parameters: B - function values at the new grid, array[0..NewHeight-1, 0..NewWidth-1] -- ALGLIB routine -- 15 May, 2007 Copyright by Bochkanov Sergey *************************************************************************/
public static void spline2dresamplebicubic( double[,] a, int oldheight, int oldwidth, out double[,] b, int newheight, int newwidth)
/************************************************************************* Bilinear spline resampling Input parameters: A - function values at the old grid, array[0..OldHeight-1, 0..OldWidth-1] OldHeight - old grid height, OldHeight>1 OldWidth - old grid width, OldWidth>1 NewHeight - new grid height, NewHeight>1 NewWidth - new grid width, NewWidth>1 Output parameters: B - function values at the new grid, array[0..NewHeight-1, 0..NewWidth-1] -- ALGLIB routine -- 09.07.2007 Copyright by Bochkanov Sergey *************************************************************************/
public static void spline2dresamplebilinear( double[,] a, int oldheight, int oldwidth, out double[,] b, int newheight, int newwidth)
/************************************************************************* This subroutine unpacks two-dimensional spline into the coefficients table Input parameters: C - spline interpolant. Result: M, N - grid size (x-axis and y-axis) Tbl - coefficients table, unpacked format, [0..(N-1)*(M-1)-1, 0..19]. For I = 0...M-2, J=0..N-2: K = I*(N-1)+J Tbl[K,0] = X[j] Tbl[K,1] = X[j+1] Tbl[K,2] = Y[i] Tbl[K,3] = Y[i+1] Tbl[K,4] = C00 Tbl[K,5] = C01 Tbl[K,6] = C02 Tbl[K,7] = C03 Tbl[K,8] = C10 Tbl[K,9] = C11 ... Tbl[K,19] = C33 On each grid square the spline is equal to: S(x,y) = SUM(c[i,j]*(t^i)*(u^j), i=0..3, j=0..3) where t = x-x[j], u = y-y[i] -- ALGLIB PROJECT -- Copyright 29.06.2007 by Bochkanov Sergey *************************************************************************/
public static void spline2dunpack( spline2dinterpolant c, out int m, out int n, out double[,] tbl)
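
The spline2d functions follow the same build-then-evaluate pattern as their one-dimensional counterparts. The sketch below (not an official ALGLIB example) builds a bilinear spline on a 2x2 grid and evaluates it at the grid center.

public static int Main(string[] args)
{
    //
    // Bilinear spline for f(x,y)=x+y on a 2x2 grid; F is array[M,N] with
    // rows indexed by Y and columns indexed by X, i.e. F[i,j]=f(x[j],y[i]).
    //
    double[] x = new double[]{0.0, 1.0};
    double[] y = new double[]{0.0, 1.0};
    double[,] f = new double[,]{{0.0, 1.0},{1.0, 2.0}};
    alglib.spline2dinterpolant s;
    alglib.spline2dbuildbilinear(x, y, f, 2, 2, out s);
    double v = alglib.spline2dcalc(s, 0.5, 0.5);          // close to 1.0
    System.Console.WriteLine("{0:F3}", v);
    return 0;
}
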
onesamplesigntest
/************************************************************************* Sign test This test checks three hypotheses about the median of the given sample. The following tests are performed: * two-tailed test (null hypothesis - the median is equal to the given value) * left-tailed test (null hypothesis - the median is greater than or equal to the given value) * right-tailed test (null hypothesis - the median is less than or equal to the given value) Requirements: * the scale of measurement should be ordinal, interval or ratio (i.e. the test could not be applied to nominal variables). The test is non-parametric and doesn't require distribution X to be normal Input parameters: X - sample. Array whose index goes from 0 to N-1. N - size of the sample. Median - assumed median value. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. While calculating p-values high-precision binomial distribution approximation is used, so significance levels have about 15 exact digits. -- ALGLIB -- Copyright 08.09.2006 by Bochkanov Sergey *************************************************************************/
public static void onesamplesigntest( double[] x, int n, double median, out double bothtails, out double lefttail, out double righttail)
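A minimal sketch of running the sign test, assuming the function is exposed on the alglib class:

    // Test whether the sample median equals 0.5.
    double[] x = new double[] { 0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3 };
    double bothTails, leftTail, rightTail;
    alglib.onesamplesigntest(x, x.Length, 0.5, out bothTails, out leftTail, out rightTail);
    if (bothTails < 0.05)
        System.Console.WriteLine("the median differs from 0.5 at the 5% significance level");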
invstudenttdistribution
studenttdistribution
/************************************************************************* Functional inverse of Student's t distribution Given probability p, finds the argument t such that stdtr(k,t) is equal to p. ACCURACY: Tested at random 1 <= k <= 100. The "domain" refers to p: Relative error: arithmetic domain # trials peak rms IEEE .001,.999 25000 5.7e-15 8.0e-16 IEEE 10^-6,.001 25000 2.0e-12 2.9e-14 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double invstudenttdistribution(int k, double p)
/************************************************************************* Student's t distribution Computes the integral from minus infinity to t of the Student t distribution with integer k > 0 degrees of freedom: stdtr(k,t) = Gamma((k+1)/2) / (sqrt(k*pi)*Gamma(k/2)) * Integral(-inf..t, (1+x^2/k)^(-(k+1)/2) dx) Relation to incomplete beta integral: 1 - stdtr(k,t) = 0.5 * incbet( k/2, 1/2, z ) where z = k/(k + t**2). For t < -2, this is the method of computation. For higher t, a direct method is derived from integration by parts. Since the function is symmetric about t=0, the area under the right tail of the density is found by calling the function with -t instead of t. ACCURACY: Tested at random 1 <= k <= 25. The "domain" refers to t. Relative error: arithmetic domain # trials peak rms IEEE -100,-2 50000 5.9e-15 1.4e-15 IEEE -2,100 500000 2.7e-15 4.9e-17 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier *************************************************************************/
public static double studenttdistribution(int k, double t)
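studenttdistribution and invstudenttdistribution are mutual inverses in p, which the following sketch checks (the alglib class prefix is an assumption):

    // Round-trip the Student t CDF and its inverse for k = 5 degrees of freedom.
    int k = 5;
    double t = alglib.invstudenttdistribution(k, 0.975);   // approximately 2.571
    double p = alglib.studenttdistribution(k, t);          // should reproduce 0.975
    System.Console.WriteLine("t = {0}, p = {1}", t, p);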
studentttest1
studentttest2
unequalvariancettest
/************************************************************************* One-sample t-test This test checks three hypotheses about the mean of the given sample. The following tests are performed: * two-tailed test (null hypothesis - the mean is equal to the given value) * left-tailed test (null hypothesis - the mean is greater than or equal to the given value) * right-tailed test (null hypothesis - the mean is less than or equal to the given value). The test is based on the assumption that a given sample has a normal distribution and an unknown dispersion. If the distribution sharply differs from normal, the test will work incorrectly. Input parameters: X - sample. Array whose index goes from 0 to N-1. N - size of sample. Mean - assumed value of the mean. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 08.09.2006 by Bochkanov Sergey *************************************************************************/
public static void studentttest1( double[] x, int n, double mean, out double bothtails, out double lefttail, out double righttail)
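A minimal one-sample t-test sketch (the alglib class prefix is an assumption):

    // Test whether the mean of the sample equals 3.0.
    double[] x = new double[] { 2.1, 2.9, 3.4, 3.0, 2.7, 3.3, 3.1, 2.8 };
    double bothTails, leftTail, rightTail;
    alglib.studentttest1(x, x.Length, 3.0, out bothTails, out leftTail, out rightTail);
    System.Console.WriteLine("two-tailed p-value = {0}", bothTails);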
/************************************************************************* Two-sample pooled test This test checks three hypotheses about the mean of the given samples. The following tests are performed: * two-tailed test (null hypothesis - the means are equal) * left-tailed test (null hypothesis - the mean of the first sample is greater than or equal to the mean of the second sample) * right-tailed test (null hypothesis - the mean of the first sample is less than or equal to the mean of the second sample). Test is based on the following assumptions: * given samples have normal distributions * dispersions are equal * samples are independent. Input parameters: X - sample 1. Array whose index goes from 0 to N-1. N - size of sample. Y - sample 2. Array whose index goes from 0 to M-1. M - size of sample. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 18.09.2006 by Bochkanov Sergey *************************************************************************/
public static void studentttest2( double[] x, int n, double[] y, int m, out double bothtails, out double lefttail, out double righttail)
/************************************************************************* Two-sample unpooled test This test checks three hypotheses about the mean of the given samples. The following tests are performed: * two-tailed test (null hypothesis - the means are equal) * left-tailed test (null hypothesis - the mean of the first sample is greater than or equal to the mean of the second sample) * right-tailed test (null hypothesis - the mean of the first sample is less than or equal to the mean of the second sample). Test is based on the following assumptions: * given samples have normal distributions * samples are independent. Dispersion equality is not required Input parameters: X - sample 1. Array whose index goes from 0 to N-1. N - size of the sample. Y - sample 2. Array whose index goes from 0 to M-1. M - size of the sample. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 18.09.2006 by Bochkanov Sergey *************************************************************************/
public static void unequalvariancettest( double[] x, int n, double[] y, int m, out double bothtails, out double lefttail, out double righttail)
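The pooled and unpooled tests take the same arguments, so they are easy to compare on one pair of samples; a minimal sketch, assuming both functions are exposed on the alglib class:

    // Compare the means of two independent samples with the pooled
    // (equal-dispersion) test and the unpooled test.
    double[] x = new double[] { 5.1, 4.9, 5.4, 5.0, 5.2, 4.8 };
    double[] y = new double[] { 5.6, 5.8, 5.5, 5.9, 5.7 };
    double bt1, lt1, rt1, bt2, lt2, rt2;
    alglib.studentttest2(x, x.Length, y, y.Length, out bt1, out lt1, out rt1);
    alglib.unequalvariancettest(x, x.Length, y, y.Length, out bt2, out lt2, out rt2);
    System.Console.WriteLine("pooled p = {0}, unpooled p = {1}", bt1, bt2);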
rmatrixsvd
/************************************************************************* Singular value decomposition of a rectangular matrix. The algorithm calculates the singular value decomposition of a matrix of size MxN: A = U * S * V^T The algorithm finds the singular values and, optionally, matrices U and V^T. The algorithm can find both first min(M,N) columns of matrix U and rows of matrix V^T (singular vectors), and matrices U and V^T wholly (of sizes MxM and NxN respectively). Take into account that the subroutine does not return matrix V but V^T. Input parameters: A - matrix to be decomposed. Array whose indexes range within [0..M-1, 0..N-1]. M - number of rows in matrix A. N - number of columns in matrix A. UNeeded - 0, 1 or 2. See the description of the parameter U. VTNeeded - 0, 1 or 2. See the description of the parameter VT. AdditionalMemory - If the parameter: * equals 0, the algorithm doesn't use additional memory (lower requirements, lower performance). * equals 1, the algorithm uses additional memory of size min(M,N)*min(M,N) of real numbers. It often speeds up the algorithm. * equals 2, the algorithm uses additional memory of size M*min(M,N) of real numbers. It allows maximum performance. The recommended value of the parameter is 2. Output parameters: W - contains singular values in descending order. U - if UNeeded=0, U isn't changed, the left singular vectors are not calculated. if UNeeded=1, U contains left singular vectors (first min(M,N) columns of matrix U). Array whose indexes range within [0..M-1, 0..Min(M,N)-1]. if UNeeded=2, U contains matrix U wholly. Array whose indexes range within [0..M-1, 0..M-1]. VT - if VTNeeded=0, VT isn't changed, the right singular vectors are not calculated. if VTNeeded=1, VT contains right singular vectors (first min(M,N) rows of matrix V^T). Array whose indexes range within [0..min(M,N)-1, 0..N-1]. if VTNeeded=2, VT contains matrix V^T wholly. Array whose indexes range within [0..N-1, 0..N-1]. -- ALGLIB -- Copyright 2005 by Bochkanov Sergey *************************************************************************/
public static bool rmatrixsvd( double[,] a, int m, int n, int uneeded, int vtneeded, int additionalmemory, out double[] w, out double[,] u, out double[,] vt)
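A minimal SVD sketch (the alglib class prefix is an assumption):

    // Full SVD of a 3x2 matrix: UNeeded=2 and VTNeeded=2 request the full U and V^T,
    // AdditionalMemory=2 trades memory for speed, as recommended above.
    double[,] a = new double[,] { { 1.0, 0.0 }, { 0.0, 2.0 }, { 0.0, 0.0 } };
    double[] w;
    double[,] u, vt;
    bool ok = alglib.rmatrixsvd(a, 3, 2, 2, 2, 2, out w, out u, out vt);
    if (ok)
        System.Console.WriteLine("singular values: {0}, {1}", w[0], w[1]);   // 2 and 1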
cmatrixlu
hpdmatrixcholesky
rmatrixlu
spdmatrixcholesky
/************************************************************************* LU decomposition of a general complex matrix with row pivoting A is represented as A = P*L*U, where: * L is lower unitriangular matrix * U is upper triangular matrix * P = P0*P1*...*PK, K=min(M,N)-1, Pi is the permutation matrix which swaps rows I and Pivots[I] This is a cache-oblivious implementation of LU decomposition. It is optimized for square matrices. As for rectangular matrices: * best case - M>>N * worst case - N>>M, small M, large N, matrix does not fit in CPU cache INPUT PARAMETERS: A - array[0..M-1, 0..N-1]. M - number of rows in matrix A. N - number of columns in matrix A. OUTPUT PARAMETERS: A - matrices L and U in compact form: * L is stored under main diagonal * U is stored on and above main diagonal Pivots - permutation matrix in compact form. array[0..Min(M-1,N-1)]. -- ALGLIB routine -- 10.01.2010 Bochkanov Sergey *************************************************************************/
public static void cmatrixlu( ref complex[,] a, int m, int n, out int[] pivots)
/************************************************************************* Cache-oblivious Cholesky decomposition The algorithm computes Cholesky decomposition of a Hermitian positive- definite matrix. The result of the algorithm is a representation of A as A=U'*U or A=L*L' (here X' denotes conj(X^T)). INPUT PARAMETERS: A - upper or lower triangle of a factorized matrix. array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - if IsUpper=True, then A contains an upper triangle of a Hermitian matrix, otherwise A contains a lower one. OUTPUT PARAMETERS: A - the result of factorization. If IsUpper=True, then the upper triangle contains matrix U, so that A = U'*U, and the elements below the main diagonal are not modified. Similarly, if IsUpper = False. RESULT: If the matrix is positive-definite, the function returns True. Otherwise, the function returns False. In this case the contents of A are not determined. -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/
public static bool hpdmatrixcholesky( ref complex[,] a, int n, bool isupper)
/************************************************************************* LU decomposition of a general real matrix with row pivoting A is represented as A = P*L*U, where: * L is lower unitriangular matrix * U is upper triangular matrix * P = P0*P1*...*PK, K=min(M,N)-1, Pi is the permutation matrix which swaps rows I and Pivots[I] This is a cache-oblivious implementation of LU decomposition. It is optimized for square matrices. As for rectangular matrices: * best case - M>>N * worst case - N>>M, small M, large N, matrix does not fit in CPU cache INPUT PARAMETERS: A - array[0..M-1, 0..N-1]. M - number of rows in matrix A. N - number of columns in matrix A. OUTPUT PARAMETERS: A - matrices L and U in compact form: * L is stored under main diagonal * U is stored on and above main diagonal Pivots - permutation matrix in compact form. array[0..Min(M-1,N-1)]. -- ALGLIB routine -- 10.01.2010 Bochkanov Sergey *************************************************************************/
public static void rmatrixlu( ref double[,] a, int m, int n, out int[] pivots)
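A minimal LU sketch for the real case; cmatrixlu is called the same way with a complex matrix (the alglib class prefix is an assumption):

    // LU factorization with row pivoting of a 3x3 matrix. On exit, a holds
    // L below the diagonal (unit diagonal implied) and U on and above it;
    // pivots records the row interchanges.
    double[,] a = new double[,]
    {
        { 2.0, 1.0, 1.0 },
        { 4.0, 3.0, 3.0 },
        { 8.0, 7.0, 9.0 }
    };
    int[] pivots;
    alglib.rmatrixlu(ref a, 3, 3, out pivots);
    System.Console.WriteLine("U[0,0] = {0}, row 0 swapped with row {1}", a[0, 0], pivots[0]);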
/************************************************************************* Cache-oblivious Cholesky decomposition The algorithm computes Cholesky decomposition of a symmetric positive- definite matrix. The result of the algorithm is a representation of A as A=U^T*U or A=L*L^T. INPUT PARAMETERS: A - upper or lower triangle of a factorized matrix. array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - if IsUpper=True, then A contains an upper triangle of a symmetric matrix, otherwise A contains a lower one. OUTPUT PARAMETERS: A - the result of factorization. If IsUpper=True, then the upper triangle contains matrix U, so that A = U^T*U, and the elements below the main diagonal are not modified. Similarly, if IsUpper = False. RESULT: If the matrix is positive-definite, the function returns True. Otherwise, the function returns False. In this case the contents of A are not determined. -- ALGLIB routine -- 15.12.2009 Bochkanov Sergey *************************************************************************/
public static bool spdmatrixcholesky(ref double[,] a, int n, bool isupper)
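A minimal Cholesky sketch for the real symmetric case; hpdmatrixcholesky is called the same way with a Hermitian complex matrix (the alglib class prefix is an assumption):

    // Cholesky factorization of a 2x2 SPD matrix, upper-triangle variant.
    double[,] a = new double[,] { { 4.0, 2.0 }, { 2.0, 3.0 } };
    bool ok = alglib.spdmatrixcholesky(ref a, 2, true);
    if (ok)
    {
        // a now holds U with A = U^T*U, i.e. U = [[2, 1], [0, sqrt(2)]]
        System.Console.WriteLine("U[1,1] = {0}", a[1, 1]);
    }
    else
        System.Console.WriteLine("the matrix is not positive-definite");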
hyperbolicsinecosineintegrals
sinecosineintegrals
/************************************************************************* Hyperbolic sine and cosine integrals Approximates the integrals Chi(x) = eul + ln(x) + Integral(0..x, (cosh(t)-1)/t dt), Shi(x) = Integral(0..x, sinh(t)/t dt), where eul = 0.57721566490153286061 is Euler's constant. The integrals are evaluated by power series for x < 8 and by Chebyshev expansions for x between 8 and 88. For large x, both functions approach exp(x)/2x. Arguments greater than 88 in magnitude return MAXNUM. ACCURACY: Test interval 0 to 88. Relative error: arithmetic function # trials peak rms IEEE Shi 30000 6.9e-16 1.6e-16 Absolute error, except relative when |Chi| > 1: IEEE Chi 30000 8.4e-16 1.4e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
public static void hyperbolicsinecosineintegrals( double x, out double shi, out double chi)
/************************************************************************* Sine and cosine integrals Evaluates the integrals Ci(x) = eul + ln(x) + Integral(0..x, (cos(t)-1)/t dt), Si(x) = Integral(0..x, sin(t)/t dt), where eul = 0.57721566490153286061 is Euler's constant. The integrals are approximated by rational functions. For x > 8 auxiliary functions f(x) and g(x) are employed such that Ci(x) = f(x) sin(x) - g(x) cos(x) Si(x) = pi/2 - f(x) cos(x) - g(x) sin(x) ACCURACY: Test interval = [0,50]. Absolute error, except relative when > 1: arithmetic function # trials peak rms IEEE Si 30000 4.4e-16 7.3e-17 IEEE Ci 30000 6.9e-16 5.1e-17 Cephes Math Library Release 2.1: January, 1989 Copyright 1984, 1987, 1989 by Stephen L. Moshier *************************************************************************/
public static void sinecosineintegrals( double x, out double si, out double ci)
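A minimal sketch for both integral routines (the alglib class prefix is an assumption):

    // Evaluate Si/Ci and Shi/Chi at x = 1.
    double si, ci, shi, chi;
    alglib.sinecosineintegrals(1.0, out si, out ci);
    alglib.hyperbolicsinecosineintegrals(1.0, out shi, out chi);
    // Expected values: Si(1) ~ 0.946, Ci(1) ~ 0.337, Shi(1) ~ 1.057, Chi(1) ~ 0.838
    System.Console.WriteLine("Si={0} Ci={1} Shi={2} Chi={3}", si, ci, shi, chi);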
ftest
onesamplevariancetest
/************************************************************************* Two-sample F-test This test checks three hypotheses about dispersions of the given samples. The following tests are performed: * two-tailed test (null hypothesis - the dispersions are equal) * left-tailed test (null hypothesis - the dispersion of the first sample is greater than or equal to the dispersion of the second sample). * right-tailed test (null hypothesis - the dispersion of the first sample is less than or equal to the dispersion of the second sample) The test is based on the following assumptions: * the given samples have normal distributions * the samples are independent. Input parameters: X - sample 1. Array whose index goes from 0 to N-1. N - sample size. Y - sample 2. Array whose index goes from 0 to M-1. M - sample size. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 19.09.2006 by Bochkanov Sergey *************************************************************************/
public static void ftest( double[] x, int n, double[] y, int m, out double bothtails, out double lefttail, out double righttail)
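A minimal F-test sketch (the alglib class prefix is an assumption):

    // Test whether two independent samples have equal dispersions.
    double[] x = new double[] { 1.0, 1.2, 0.9, 1.1, 1.0, 0.8 };
    double[] y = new double[] { 2.0, 0.1, 3.5, -1.2, 2.8, 0.4 };
    double bothTails, leftTail, rightTail;
    alglib.ftest(x, x.Length, y, y.Length, out bothTails, out leftTail, out rightTail);
    if (bothTails < 0.05)
        System.Console.WriteLine("the dispersions differ at the 5% significance level");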
/************************************************************************* One-sample chi-square test This test checks three hypotheses about the dispersion of the given sample The following tests are performed: * two-tailed test (null hypothesis - the dispersion equals the given number) * left-tailed test (null hypothesis - the dispersion is greater than or equal to the given number) * right-tailed test (null hypothesis - dispersion is less than or equal to the given number). Test is based on the following assumptions: * the given sample has a normal distribution. Input parameters: X - sample 1. Array whose index goes from 0 to N-1. N - size of the sample. Variance - dispersion value to compare with. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 19.09.2006 by Bochkanov Sergey *************************************************************************/
public static void onesamplevariancetest( double[] x, int n, double variance, out double bothtails, out double lefttail, out double righttail)
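A minimal one-sample variance (chi-square) test sketch (the alglib class prefix is an assumption):

    // Test whether the dispersion of the sample equals 1.0.
    double[] x = new double[] { -0.5, 0.3, 1.2, -1.1, 0.8, -0.2, 0.6, -0.9 };
    double bothTails, leftTail, rightTail;
    alglib.onesamplevariancetest(x, x.Length, 1.0, out bothTails, out leftTail, out rightTail);
    System.Console.WriteLine("two-tailed p-value = {0}", bothTails);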
wilcoxonsignedranktest
/************************************************************************* Wilcoxon signed-rank test This test checks three hypotheses about the median of the given sample. The following tests are performed: * two-tailed test (null hypothesis - the median is equal to the given value) * left-tailed test (null hypothesis - the median is greater than or equal to the given value) * right-tailed test (null hypothesis - the median is less than or equal to the given value) Requirements: * the scale of measurement should be ordinal, interval or ratio (i.e. the test could not be applied to nominal variables). * the distribution should be continuous and symmetric relative to its median. * number of distinct values in the X array should be greater than 4 The test is non-parametric and doesn't require distribution X to be normal Input parameters: X - sample. Array whose index goes from 0 to N-1. N - size of the sample. Median - assumed median value. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. To calculate p-values, a special approximation is used. This method lets us calculate p-values with two decimal places in the interval [0.0001, 1]. "Two decimal places" does not sound very impressive, but in practice a relative error of less than 1% is enough to make a decision. There is no approximation outside the [0.0001, 1] interval. Therefore, if the significance level lies outside this interval, the test returns 0.0001. -- ALGLIB -- Copyright 08.09.2006 by Bochkanov Sergey *************************************************************************/
public static void wilcoxonsignedranktest( double[] x, int n, double e, out double bothtails, out double lefttail, out double righttail)
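A minimal Wilcoxon signed-rank sketch; note that in the signature above the assumed median is passed as the parameter e (the alglib class prefix is an assumption):

    // Test whether the sample median equals 10; the sample must contain
    // more than 4 distinct values, as required above.
    double[] x = new double[] { 9.1, 10.4, 8.7, 11.2, 10.9, 9.8, 12.3, 9.5 };
    double bothTails, leftTail, rightTail;
    alglib.wilcoxonsignedranktest(x, x.Length, 10.0, out bothTails, out leftTail, out rightTail);
    System.Console.WriteLine("two-tailed p-value = {0}", bothTails);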