Re: Constrained non linear regression using ML
From: Jaroslav Hajek
Subject: Re: Constrained non linear regression using ML
Date: Wed, 17 Mar 2010 12:59:00 +0100
On Wed, Mar 17, 2010 at 12:40 PM, Corrado <address@hidden> wrote:
> Dear Jaroslav, Fredrik, Octave,
>
> some further information:
>
> 1) when you reverse transform, the formula is actually:
>
> k0 + k1*p1 + ... + kn*pn = -log(1-y)
>
> I think it is (correct me if I am wrong!).
Yes. Just beware of the severe loss of accuracy in log (1-y) when y is
small. That's why I wrote it using log1p.
> That means that if any of the
> y are 1 (and they are!), then the LHS of the equation is Infinite, and
> ruins the entire reasoning.
> How would you deal with that?
Depends. You see that your model y = 1 - exp (-(k0 + k'*p)) can never
actually yield values of 1 or more, so values >= 1 are plain wrong. If
there are just a few of them (say < 5%), you can simply drop them from
the dataset. If there are many (in which case there is probably
something wrong with the measurements, or the values of k and p are
very high), the transformation is not feasible, and you need to revert
to ML on the original y. In that case I would not expect an accurate
estimate anyway, though, because the noise component is probably quite
high.
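To make the "drop them" option concrete, here is a minimal Octave
sketch, assuming y is a column vector of responses and p the matrix of
predictors with one row per observation (the names and the 5%
threshold are just illustrative):

```octave
% Keep only observations the model can actually produce (y < 1).
ok = y < 1;
if mean (! ok) > 0.05
  warning ("more than 5%% of responses are >= 1; consider ML on the original y");
endif
y = y(ok);
p = p(ok, :);
```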
>
> 2) Jaroslav says: If you don't have a prior estimate of the error
> distribution parameters, you need to estimate them as well. I am not
> clear really about that.
> Could you please explain what you mean?
The beta distribution is a parameterized family, so saying "the error
is beta-distributed" is not enough to define the ML estimate. For some
families, such as the Gaussian, the ML estimate of the mean happens to
be independent of the remaining parameters (sigma), but this is not
true in general.
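As an illustration of estimating the noise parameter jointly, here is
a small sketch under the (hypothetical) assumption of Gaussian errors,
reusing y and p from the discussion above; fminsearch minimizes the
negative log-likelihood over [k; log(sigma)], and the log(sigma)
parameterization keeps sigma positive during the search:

```octave
% Sketch: jointly estimate k and sigma by ML under Gaussian noise.
model = @(k, p) 1 - exp (-([ones(rows (p), 1), p] * k));
nll = @(th, y, p) numel (y) * th(end) ...
      + sumsq (y - model (th(1:end-1), p)) / (2 * exp (2 * th(end)));
th0 = [zeros(columns (p) + 1, 1); 0];   % crude start: [k; log(sigma)]
th_est = fminsearch (@(th) nll (th, y, p), th0);
sigma_est = exp (th_est(end));
```

For a non-Gaussian family such as the beta, the same pattern applies:
the shape parameters enter the likelihood and must be searched over
together with k.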
> 3) Finally, if you need to establish an initial condition, that is an
> initial value for the parameters, then I would in any case use NLS (as
> in Assumption 1) to determine them.
You can, except that if Assumption 2) is feasible, it will be much
faster. That assumption reduces the problem to a linear LS problem,
which can be solved very easily and efficiently in Octave:
x = -log1p (-y);
k_est = [ones(rows (p), 1), p] \ x;
or (this will enforce nonnegative k)
k_est = lsqnonneg ([ones(rows (p), 1), p], x);
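As a sanity check, here is a small self-contained sketch (the sizes,
true coefficients, and noise level are made up) that generates
synthetic data from the model and recovers k with the transformed
linear LS fit:

```octave
% Synthetic check of the log1p transformation (illustrative values only).
m = 500;                          % observations
p = rand (m, 2);                  % predictors, one row per observation
k_true = [0.5; 1.0; 2.0];         % [k0; k1; k2]
y = 1 - exp (-([ones(m, 1), p] * k_true));
y = min (y + 0.01 * randn (m, 1), 1 - eps);  % small noise, keep y < 1
x = -log1p (-y);
k_est = [ones(m, 1), p] \ x       % should come out close to k_true
```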
--
RNDr. Jaroslav Hajek, PhD
computing expert & GNU Octave developer
Aeronautical Research and Test Institute (VZLU)
Prague, Czech Republic
url: www.highegg.matfyz.cz