Pearson’s r (correlation coefficient).
Pearson(x,y) --> correlation coefficient
x and y are arrays of the same length.
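For illustration, a minimal sketch of the calculation from the definition; the name pearson_r is only a stand-in for the documented Pearson(x,y), and np.corrcoef or scipy.stats.pearsonr would give the same number::

    import numpy as np

    def pearson_r(x, y):
        """Pearson correlation coefficient of two equal-length 1D arrays."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        dx, dy = x - x.mean(), y - y.mean()
        return np.sum(dx * dy) / np.sqrt(np.sum(dx * dx) * np.sum(dy * dy))

    # perfectly linear data gives r = 1
    x = np.arange(10.0)
    print(pearson_r(x, 2.0 * x + 1.0))   # -> 1.0 (up to rounding)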
Fit a straight line y = a + bx to the data in x and y.
Errors on y should be provided in dy so that the goodness of fit can be assessed and errors on the parameters derived.
linfit(x,y[,dy]) --> result_dict
Fit y = a + bx to the data in x and y by analytically minimizing chi^2. dy holds the standard deviations of the individual y_i. If dy is not given, the errors are assumed to be constant; in that case Q is set to 1 and is meaningless, and chi2 is normalised to unit standard deviation on all points.
Returns the parameters a and b, their uncertainties sigma_a and sigma_b, and their correlation coefficient r_ab, together with the chi-squared statistic chi2 and the goodness-of-fit probability Q, i.e. the probability that a chi-square as poor as the calculated chi2 would occur by chance. Q < 10^-2 indicates that the model is probably bad.
Returns: result_dict whose components hold the fit results described above.
Based on ‘Numerical Recipes in C’, Ch 15.2.
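A sketch of the analytic minimization, following the standard weighted least-squares formulas of Numerical Recipes Ch. 15.2. The dictionary keys below are illustrative only, since the actual components of result_dict are not spelled out here::

    import numpy as np
    from scipy import special

    def linfit_sketch(x, y, dy=None):
        """Least-squares fit of y = a + b*x with optional errors dy on y."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        n = len(x)
        sigma = np.ones_like(y) if dy is None else np.asarray(dy, dtype=float)
        w = 1.0 / sigma**2                        # weights 1/sigma_i^2
        S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
        Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
        Delta = S * Sxx - Sx * Sx
        a = (Sxx * Sy - Sx * Sxy) / Delta         # intercept
        b = (S * Sxy - Sx * Sy) / Delta           # slope
        sigma_a, sigma_b = np.sqrt(Sxx / Delta), np.sqrt(S / Delta)
        r_ab = -Sx / np.sqrt(S * Sxx)             # correlation between a and b
        chi2 = (w * (y - a - b * x)**2).sum()
        if dy is None:
            # no errors given: rescale assuming unit standard deviation on all
            # points; Q carries no information in this case
            scale = np.sqrt(chi2 / (n - 2))
            sigma_a, sigma_b = scale * sigma_a, scale * sigma_b
            Q = 1.0
        else:
            Q = special.gammaincc(0.5 * (n - 2), 0.5 * chi2)   # goodness of fit
        return {'intercept': a, 'slope': b,
                'sigma_intercept': sigma_a, 'sigma_slope': sigma_b,
                'parameter_correlation': r_ab, 'chi_square': chi2, 'Q': Q}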
Fit a function f to data (x,y) using the method of least squares.
The function is fitted when the object is created, using scipy.optimize.leastsq(). One must derive from the base class FitFunc and override FitFunc.f_factory() (defining an appropriate local fitfunc() function inside it) and FitFunc.initial_values(). See the examples for a linear fit (FitLin), a 1-parameter exponential fit (FitExp), and a 3-parameter double-exponential fit (FitExp2).
After a successful fit, the fitted function can be applied to any data (a 1D numpy array) with FitFunc.fit().
Stub for the fit function factory, which returns the fit function. Override in derived classes.
Apply the fitted function to all x values.
List of initial guesses for all parameters p[].
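A minimal, self-contained sketch of the pattern described above. The constructor signature (x, y), the fitfunc(x, p) calling convention, and the underscore-prefixed class names are assumptions for illustration, not the module's actual code::

    import numpy as np
    from scipy.optimize import leastsq

    class _FitFuncSketch(object):
        """Hypothetical stand-in for FitFunc: fits the model to (x, y) on construction."""
        def __init__(self, x, y):
            x, y = np.asarray(x), np.asarray(y)
            fitfunc = self.f_factory()                    # model f(x, p)
            errfunc = lambda p, x, y: fitfunc(x, p) - y   # residuals for leastsq
            self.parameters, self.ier = leastsq(errfunc, self.initial_values(),
                                                args=(x, y))

        def f_factory(self):
            """Stub: derived classes return the local fitfunc(x, p)."""
            raise NotImplementedError

        def initial_values(self):
            """Stub: derived classes return the list of initial guesses p[]."""
            raise NotImplementedError

        def fit(self, x):
            """Apply the fitted function to all values of the 1D array x."""
            return self.f_factory()(np.asarray(x), self.parameters)

    class _FitLinSketch(_FitFuncSketch):
        """y = f(x) = p[0]*x + p[1]"""
        def f_factory(self):
            def fitfunc(x, p):
                return p[0] * x + p[1]
            return fitfunc
        def initial_values(self):
            return [1.0, 0.0]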
y = f(x) = p[0]*x + p[1]
y = f(x) = exp(-p[0]*x)
y = f(x) = p[0]*exp(-p[1]*x) + (1-p[0])*exp(-p[2]*x)
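Continuing the sketch above, the double-exponential model could be written as a derived class along these lines (again an illustration, not the module's actual FitExp2)::

    class _FitExp2Sketch(_FitFuncSketch):
        """y = f(x) = p[0]*exp(-p[1]*x) + (1-p[0])*exp(-p[2]*x)"""
        def f_factory(self):
            def fitfunc(x, p):
                return p[0] * np.exp(-p[1] * x) + (1 - p[0]) * np.exp(-p[2] * x)
            return fitfunc
        def initial_values(self):
            return [0.5, 0.1, 1.0]

    # usage: the fit runs on construction, the fitted curve can then be
    # evaluated on any 1D array
    x = np.linspace(0, 10, 50)
    y = 0.7 * np.exp(-0.3 * x) + 0.3 * np.exp(-2.0 * x) + 0.01 * np.random.randn(50)
    f = _FitExp2Sketch(x, y)
    y_smooth = f.fit(np.linspace(0, 10, 500))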