statistics - Nonlinear e^(-x) regression using scipy, python, numpy
The code below is giving me a flat line of best fit rather than a nice curve along the model of e^(-x) to fit my data. Can you show me how to fix the code below so that it fits my data?
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize

def _enegx_(p, x):
    x0, y0, c, k = p
    y = (c * np.exp(-k * (x - x0))) + y0
    return y

def _enegx_residuals(p, x, y):
    return y - _enegx_(p, x)

def get_enegx_coefficients(x, y):
    print('x is: ', x)
    print('y is: ', y)

    # Calculate p_guess for the vectors x, y. Note that p_guess is the
    # starting estimate for the minimization.
    p_guess = (np.median(x), np.min(y), np.max(y), 0.01)

    # Call leastsq(), which calls the residuals function with the initial
    # guess for the parameters and the x and y vectors. Note that the
    # residuals function calls the _enegx_ function. This will return the
    # parameters p that minimize the least-squares error of the _enegx_
    # function with respect to the original x and y coordinate vectors
    # sent to it.
    p, cov, infodict, mesg, ier = scipy.optimize.leastsq(
        _enegx_residuals, p_guess, args=(x, y), full_output=1)

    # Define the optimal values for each element of p returned by leastsq().
    x0, y0, c, k = p
    print('''Reference data:
    x0 = {x0}
    y0 = {y0}
    c = {c}
    k = {k}
    '''.format(x0=x0, y0=y0, c=c, k=k))

    print('x.min() is: ', x.min())
    print('x.max() is: ', x.max())

    # Create a numpy array of x-values
    numpoints = int(np.floor((x.max() - x.min()) * 100))
    xp = np.linspace(x.min(), x.max(), numpoints)
    print('numpoints is: ', numpoints)
    print('xp is: ', xp)
    print('p is: ', p)
    pxp = _enegx_(p, xp)
    print('pxp is: ', pxp)

    # Plot the results
    plt.plot(x, y, '>', xp, pxp, 'g-')
    plt.xlabel('bpm%rest')
    plt.ylabel('lvet/bpm', rotation='vertical')
    plt.xlim(0, 3)
    plt.ylim(0, 4)
    plt.grid(True)
    plt.show()
    return p

# Declare the raw data for use in creating the regression equation
x = np.array([1, 1.425, 1.736, 2.178, 2.518], dtype='float')
y = np.array([3.489, 2.256, 1.640, 1.043, 0.853], dtype='float')
p = get_enegx_coefficients(x, y)
It looks like it's a problem with the initial guesses; (1, 1, 1, 1) works fine:
You have

p_guess = (np.median(x), np.min(y), np.max(y), 0.01)

for the function

def _enegx_(p, x):
    x0, y0, c, k = p
    y = (c * np.exp(-k * (x - x0))) + y0
    return y
So that's test_data_max * e^(-0.01 * (x - test_data_median)) + test_data_min.
I don't know the art of choosing starting parameters, but I can say a few things. leastsq is finding a local minimum here; the key in choosing these values is to find the right mountain to climb, not to try to cut down on the work the minimization algorithm has to do. Your initial guess looks like this (green):

(1.736, 0.85299999999999998, 3.4889999999999999, 0.01)

which results in the flat line (blue):

(-59.20295956, 1.8562, 1.03477144, 0.69483784)
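To see this concretely, here is a minimal, runnable comparison of the two starting guesses on the data from the question; the fit started from (1, 1, 1, 1) ends with a much lower sum of squared residuals than the one started from the original p_guess:

```python
import numpy as np
import scipy.optimize

def _enegx_(p, x):
    x0, y0, c, k = p
    return c * np.exp(-k * (x - x0)) + y0

def _enegx_residuals(p, x, y):
    return y - _enegx_(p, x)

x = np.array([1, 1.425, 1.736, 2.178, 2.518])
y = np.array([3.489, 2.256, 1.640, 1.043, 0.853])

# First guess: the original p_guess (lands on the flat line).
# Second guess: (1, 1, 1, 1) (finds the decaying curve).
ssrs = []
for p_guess in [(np.median(x), np.min(y), np.max(y), 0.01), (1, 1, 1, 1)]:
    p, ier = scipy.optimize.leastsq(_enegx_residuals, p_guess, args=(x, y))
    ssr = np.sum(_enegx_residuals(p, x, y) ** 2)
    ssrs.append(ssr)
    print(p_guess, '-> sum of squared residuals:', ssr)
```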
Greater gains are made by adjusting the height of the line than by increasing the k value. If you know you're fitting this kind of data, use a larger k. If you don't know, you could try to guess a decent k value by sampling the data, or by working from the slope between the average of the first half and the average of the second half, though I wouldn't know exactly how to go about that.
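As an illustration only (this is my own sketch of the half-averages idea, not something from the answer, and `rough_k_guess` is a hypothetical helper), one crude way to sample a k from the data is to compare the averages of the two halves in log space, after shifting by a baseline guess:

```python
import numpy as np

def rough_k_guess(x, y):
    """Crude decay-rate estimate: for y ~ c*exp(-k*x) + y0, compare the
    mean of the first half of the shifted y data with the mean of the
    second half, in log space."""
    y0 = y.min()                        # baseline guess
    mid = len(x) // 2
    y1 = np.mean(y[:mid] - y0) + 1e-12  # small offset avoids log(0)
    y2 = np.mean(y[mid:] - y0) + 1e-12
    dx = np.mean(x[mid:]) - np.mean(x[:mid])
    return np.log(y1 / y2) / dx

x = np.array([1, 1.425, 1.736, 2.178, 2.518])
y = np.array([3.489, 2.256, 1.640, 1.043, 0.853])
k = rough_k_guess(x, y)
print('rough k guess:', k)
```

On this data it suggests a k around 2, far from the 0.01 in the original p_guess.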
Edit: Or start with several guesses, run the minimization several times, and take the line with the lowest residuals.
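A minimal sketch of that multi-start idea, assuming the `_enegx_` model and data from the question (the extra starting guesses here are arbitrary examples):

```python
import numpy as np
import scipy.optimize

def _enegx_(p, x):
    x0, y0, c, k = p
    return c * np.exp(-k * (x - x0)) + y0

def _enegx_residuals(p, x, y):
    return y - _enegx_(p, x)

x = np.array([1, 1.425, 1.736, 2.178, 2.518])
y = np.array([3.489, 2.256, 1.640, 1.043, 0.853])

# Run the minimization from several starting guesses and keep the
# result with the lowest sum of squared residuals.
guesses = [(1, 1, 1, 1),
           (np.median(x), y.min(), y.max(), 0.01),
           (0, 0, 3, 2),
           (2, 1, 1, 0.5)]
best_p, best_ssr = None, np.inf
for p_guess in guesses:
    p, ier = scipy.optimize.leastsq(_enegx_residuals, p_guess, args=(x, y))
    ssr = np.sum(_enegx_residuals(p, x, y) ** 2)
    if ssr < best_ssr:
        best_p, best_ssr = p, ssr
print('best parameters:', best_p, 'ssr:', best_ssr)
```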