Hi,
Having found out that the avr-gcc library (avr-libc) supports floats and
handles them quite gracefully, I tried porting my linear regression code
from the desktop onto the ATmega128L more or less as is. For testing I
used the same table of X and Y values as sample inputs to the algorithm.
Now I am using microsecond-granularity timestamps as input to the
regression engine, and since the values are fairly large, and since there
are sums of products, I get fairly large numbers (if I do the same on my
desktop, the biggest number I get is sumXSq=7260107358192.000000). Now I
know that avr-gcc's doubles are only 4 bytes, the same size as its
floats, and I was thinking that would not be enough precision. Is there
any way around this?
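
To make the magnitudes concrete, the sums are the usual least-squares
accumulations. Here is a simplified sketch, not my exact code; only the
name sumXSq is taken from what I actually print, the rest is
illustrative:

    #include <stdint.h>

    #define N_SAMPLES 16               /* illustrative sample count */

    /* X holds microsecond timestamps, so a single x*x term is
       already on the order of 1e12. */
    static float x[N_SAMPLES], y[N_SAMPLES];

    static void accumulate(float *sumX, float *sumY,
                           float *sumXY, float *sumXSq)
    {
        *sumX = *sumY = *sumXY = *sumXSq = 0.0f;
        for (uint8_t i = 0; i < N_SAMPLES; i++) {
            *sumX   += x[i];
            *sumY   += y[i];
            *sumXY  += x[i] * y[i];
            /* every addition here rounds to float's 24-bit
               significand, so the low digits of the sum are lost */
            *sumXSq += x[i] * x[i];
        }
    }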
By the way, when I print the same result over the serial port (from my
embedded port) I get something like sumXSq=7260108000000.000000, which is
the same value but with the last six significant digits lost to rounding.
Any suggestions? Please note that optimization isn't an issue at the
moment; I just want to get this to run, and even if it is monstrously
slow, that's fine, since I only have to do it once at every boot-up!
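
For what it's worth, I can reproduce the digit loss on the desktop just
by forcing the value into a 32-bit float, which is what avr-gcc uses for
double as well: a 24-bit significand carries only about 7 significant
decimal digits, and around 7.26e12 adjacent floats are 2^19 = 524288
apart.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* the exact desktop result, squeezed into a 32-bit float */
        float f = 7260107358192.0f;

        /* should print 7260107440128.000000: everything past the
           7th significant digit is gone */
        printf("as float: %f\n", (double)f);

        /* spacing between adjacent floats at this magnitude,
           i.e. 524288.000000 */
        printf("ulp: %f\n", (double)(nextafterf(f, INFINITY) - f));
        return 0;
    }

The AVR prints trailing zeros instead (7260108000000.000000); I assume
that is partly because the sums round differently when accumulated step
by step in 32-bit floats, and partly because avr-libc's printf seems to
emit only about 7 significant digits.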