Having had a chance to review the code a bit...
Indeed I meant these, but also the other similar ones:
int gsl_vector_long_scale (gsl_vector_long * a, const double x);
int gsl_vector_long_add_constant (gsl_vector_long * a, const double x);
and so forth.
I can see arguments for why gsl_vector_long_scale should take a
double. Say I want to scale a vector of longs by a non-integer factor
and coerce the results back to long. Multiplying every long by, say,
1.5 is reasonable and gives a dramatically different result from
multiplying by ((long) 1.5), i.e. by 1; a short sketch follows the
list below. Therefore I don't consider the "const double x" arguments
to be a bug in these routines:
int gsl_vector_char_scale (gsl_vector_char * a, const double x);
int gsl_vector_complex_float_scale (gsl_vector_complex_float * a,
const gsl_complex_float x);
int gsl_vector_complex_long_double_scale
(gsl_vector_complex_long_double * a, const gsl_complex_long_double x);
int gsl_vector_complex_scale (gsl_vector_complex * a, const gsl_complex x);
int gsl_vector_int_scale (gsl_vector_int * a, const double x);
int gsl_vector_long_scale (gsl_vector_long * a, const double x);
int gsl_vector_scale (gsl_vector * a, const double x);
int gsl_vector_short_scale (gsl_vector_short * a, const double x);
int gsl_vector_uchar_scale (gsl_vector_uchar * a, const double x);
int gsl_vector_uint_scale (gsl_vector_uint * a, const double x);
int gsl_vector_ulong_scale (gsl_vector_ulong * a, const double x);
int gsl_vector_ushort_scale (gsl_vector_ushort * a, const double x);
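Here is a minimal plain-C sketch (not GSL code, values chosen only for
illustration) of why a double factor is meaningful for an integer
vector:

#include <stdio.h>

int main (void)
{
  long v[] = { 2, 4, 6 };
  size_t i;

  for (i = 0; i < 3; i++)
    {
      /* multiply in double, then coerce the result back to long */
      long by_double = (long) (v[i] * 1.5);   /* 3, 6, 9 */
      /* coerce the factor to long first: (long) 1.5 == 1 */
      long by_long   = v[i] * (long) 1.5;     /* 2, 4, 6 */
      printf ("%ld: %ld vs %ld\n", v[i], by_double, by_long);
    }
  return 0;
}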
The only problem appears to be in the following two signatures:
int gsl_vector_float_scale (gsl_vector_float * a, const double x);
int gsl_vector_long_double_scale (gsl_vector_long_double * a, const double x);
The former should probably take "const float x", but no real harm is
caused by taking a double (other than a small performance hit for
double FLOPs where float FLOPs would suffice). The latter certainly
can cause precision loss, since a double cannot carry the extra
precision of a long double scale factor. I am inclined to change only
these two signatures as far as scaling vectors is concerned.
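A small plain-C sketch of that loss (it assumes a platform where long
double is wider than double, e.g. x87 80-bit):

#include <stdio.h>

int main (void)
{
  long double exact  = 1.0L / 3.0L;             /* full long double 1/3 */
  double      as_dbl = (double) (1.0L / 3.0L);  /* what the API would receive */

  printf ("long double 1/3: %.21Lf\n", exact);
  printf ("via double  1/3: %.21Lf\n", (long double) as_dbl);
  printf ("difference     : %.3Le\n", exact - (long double) as_dbl);
  return 0;
}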
In contrast, gsl_vector_long_add_constant feels like it should take a
long, since adding 1.5 or ((long) 1.5) to a long gives the same result.
In cases where double precision values can represent integers exactly
(a surprisingly large stretch of the number line, up to 2^53), there
is no precision-related harm in the current API; it is just
inefficient and confusing. In cases where double precision is
insufficient, precision loss can occur. I believe these routines all
fall into one camp or the other and should be changed (a short sketch
follows the list):
int gsl_vector_char_add_constant (gsl_vector_char * a, const double x);
int gsl_vector_float_add_constant (gsl_vector_float * a, const double x);
int gsl_vector_int_add_constant (gsl_vector_int * a, const double x);
int gsl_vector_long_add_constant (gsl_vector_long * a, const double x);
int gsl_vector_long_double_add_constant (gsl_vector_long_double * a,
const double x);
int gsl_vector_short_add_constant (gsl_vector_short * a, const double x);
int gsl_vector_uchar_add_constant (gsl_vector_uchar * a, const double x);
int gsl_vector_uint_add_constant (gsl_vector_uint * a, const double x);
int gsl_vector_ulong_add_constant (gsl_vector_ulong * a, const double x);
int gsl_vector_ushort_add_constant (gsl_vector_ushort * a, const double x);
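For illustration, a plain-C sketch of both cases (it assumes a 64-bit
long; the values are only examples):

#include <stdio.h>

int main (void)
{
  long n = 10;
  long big = (1L << 53) + 1;   /* 9007199254740993 */

  /* A fractional constant adds nothing here: same result either way
     once the sum is coerced back to long. */
  printf ("%ld %ld\n", (long) (n + 1.5), n + (long) 1.5);  /* 11 11 */

  /* Past 2^53 a double can no longer represent every integer, so
     routing the constant through a double silently changes it. */
  printf ("%ld %ld\n", big, (long) (double) big);          /* off by one */

  return 0;
}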
Thoughts?