help-octave

Re: QR vs LU factorisation


From: Fredrik Lingvall
Subject: Re: QR vs LU factorisation
Date: Tue, 01 Jul 2008 22:28:42 +0200
User-agent: Thunderbird 2.0.0.14 (X11/20080505)

Vic Norton wrote:

> I just used the "inverse" problem as an svd example, Fredrik. I don't generally compute inverses.
>
> Here is the kind of "noisy" data problem I am thinking of. Suppose you are interested in 50 equity "funds" and you have 50 vectors of 39 successive 13-week returns. Then you can compute 50 expected returns (assuming some kind of weighting, perhaps uniform, of the individual returns) and 50 vectors of deviations from these expected returns. Think of the deviation vectors as the risks of the individual funds. Your problem: express expected return as a linear function of risk.
>
> For this problem you want to solve
>
>    e' * R = E
>
> for the 39 x 1 expected return vector e, where R is the 39 x 50 risk matrix and E is the 1 x 50 matrix of expected returns. None of your data, being ultimately based on dollars-and-cents prices per share, has even 6-digit accuracy. This is a problem where the svd approach with svdcut = 1e-6 would seem to be a natural choice. But you definitely would NOT compute the "pseudoinverse" of R to solve the problem.
>
> Regards,
>
> Vic
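Vic's truncated-SVD approach can be sketched numerically. The sketch below is in Python/NumPy rather than Octave, and the matrix R and returns E are random stand-ins with the stated sizes (illustrative assumptions, not the actual fund data); it solves R' * e = E' while discarding singular values below svdcut times the largest one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: R is a 39 x 50 "risk" matrix, E a length-50 row of
# expected returns generated from a known e_true (noise-free here).
R = rng.standard_normal((39, 50))
e_true = rng.standard_normal(39)
E = e_true @ R

# Solve R' * e = E by truncated SVD: drop singular values below
# svdcut * s_max, mimicking svdcut = 1e-6.
svdcut = 1e-6
U, s, Vt = np.linalg.svd(R.T, full_matrices=False)  # R' = U @ diag(s) @ Vt
keep = s > svdcut * s.max()
e_hat = Vt[keep].T @ ((U[:, keep].T @ E) / s[keep])

print(np.allclose(e_hat, e_true))
```

With noise-free data and a well-conditioned R, no singular values are cut and the recovery is exact; with real, noisy data the cut discards the directions that would otherwise amplify the noise.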

Hi Vic,

If I understand you correctly, E is your (noisy) data, R is known, and e holds the parameters you want to find. In that case I would set up the problem as follows (using the convention that all vectors are column vectors):

E = R'*e + n

where n is a "noise" vector. If you have an idea of the precision (variance) of your data E, you can assign a zero-mean Gaussian distribution to n. Normally you also have an idea (from previous measurements, for example) of how much the parameters e may vary. For simplicity, let us say you know the variance here too, so we assign a Gaussian distribution to e as well, i.e., p(e|I) = N(0,Ce), where Ce is the covariance matrix of e. The likelihood of your data then becomes

p(E|e,I) = N(R'*e, Cn) = 1/( (2*pi)^(N/2) * sqrt(det(Cn)) ) * exp( -1/2*(E-R'*e)'*inv(Cn)*(E-R'*e) )

where Cn is the covariance matrix for n [Cn = sigma_n^2*eye(50,50) if the errors are the same for all elements in E].
Bayes' rule then gives the pdf for e given the data,

p(e|E,I) = p(e|I) * p(E|e,I) / const.        (1)

You can now find the most likely e by taking the log of (1) and setting the derivative w.r.t. e to zero, which gives

e_hat = inv( R*inv(Cn)*R' + inv(Ce) ) * R*inv(Cn) * E
      = Ce*R*inv( R'*Ce*R + Cn ) * E             (2)
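As a quick numerical check (a Python/NumPy sketch with arbitrary symmetric positive-definite covariances, not part of the derivation itself), the two forms of (2) agree, which is an instance of the matrix inversion lemma:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes matching the example: 39 parameters, 50 data points.
m, p = 39, 50
R = rng.standard_normal((m, p))
E = rng.standard_normal(p)

# Arbitrary SPD covariances for the prior (Ce) and the noise (Cn).
A = rng.standard_normal((m, m)); Ce = A @ A.T + m * np.eye(m)
B = rng.standard_normal((p, p)); Cn = B @ B.T + p * np.eye(p)

inv = np.linalg.inv
form1 = inv(R @ inv(Cn) @ R.T + inv(Ce)) @ R @ inv(Cn) @ E
form2 = Ce @ R @ inv(R.T @ Ce @ R + Cn) @ E

print(np.allclose(form1, form2))
```

The second form is often preferable here because the matrix being inverted is 50 x 50 and involves Cn directly rather than its inverse.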

If Ce = sigma_e^2*eye(39,39) and Cn = sigma_n^2*eye(50,50), then (2) becomes

e_hat = R*inv(R'*R+sigma_n^2/sigma_e^2*eye(50,50) ) * E

Note that the matrix R'*R + sigma_n^2/sigma_e^2*eye(50,50) is always positive definite, so the inverse is safe.
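This scalar-covariance case is ordinary ridge (Tikhonov) regularization. A small Python/NumPy sketch (with random stand-in data and assumed sigma values) checks positive definiteness via a Cholesky factorization and uses a linear solve rather than forming the inverse explicitly, which is the numerically preferable route in Octave as well:

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 39, 50
R = rng.standard_normal((m, p))
E = rng.standard_normal(p)
sigma_n, sigma_e = 0.1, 1.0   # assumed noise and prior std deviations

lam = sigma_n**2 / sigma_e**2
M = R.T @ R + lam * np.eye(p)

# M is positive definite: Cholesky succeeds (it raises LinAlgError otherwise).
np.linalg.cholesky(M)

# e_hat = R * inv(R'*R + lam*I) * E, computed via a solve, not an inverse.
e_hat = R @ np.linalg.solve(M, E)

# The push-through identity gives the same estimate from a 39 x 39 system.
e_hat2 = np.linalg.solve(R @ R.T + lam * np.eye(m), R @ E)
print(np.allclose(e_hat, e_hat2))
```

The second solve works with a 39 x 39 matrix instead of 50 x 50, which is cheaper whenever there are fewer parameters than data points.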

Hope this helps.

/Fredrik

--
Fredrik Lingvall, PhD E-mail: address@hidden, address@hidden
Web:    http://folk.uio.no/fl/


