bug-apl

Re: [Bug-apl] Use with word2vec


From: Juergen Sauermann
Subject: Re: [Bug-apl] Use with word2vec
Date: Sat, 29 Apr 2017 13:04:03 +0200
User-agent: Mozilla/5.0 (X11; Linux i686; rv:45.0) Gecko/20100101 Thunderbird/45.2.0

Hi Fred,

I have not fully understood what you want to do exactly, but it looks to me as if you want to go for
native GNU APL functions. Native functions provide the means to bypass the GNU APL interpreter
itself to the extent desired. For example, you can use APL variables but not the APL parser, or the
APL parser but not the implementation of primitives, or whatever else you are up to.

As to plain double vectors, it is very difficult to introduce them as a new built-in data type, because such a
change would affect every APL primitive, every APL operator, )LOAD, )SAVE, )DUMP, and a lot
more.

However, you can have a look at (the top level of) the implementation of the matrix divide primitive, which
does what you may be after. The implementation of matrix divide expects either a double vector or
a complex<double> vector as argument(s) and returns such a vector as result. Before and after the computation
of matrix divide, a conversion between APL values and the plain double or complex vector is performed.
This conversion is very lightweight. If you have a homogeneous GNU APL value, say all ravel items being double,
then that value is almost like a C double *. The difference is a space between adjacent ravel elements. In other
words (expressed in APL):

C_vector ←→ 1 0 1 0 ... / APL_vector
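To make that spacing concrete: the sketch below uses a hypothetical stand-in for a ravel cell (the real GNU APL Cell class differs), but it shows why extracting the packed C vector from a homogeneous APL value is just a strided copy, i.e. the `1 0 1 0 ... /` compression above.

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for an APL ravel cell: each cell carries a double
// payload plus some bookkeeping, so adjacent doubles are NOT contiguous
// in memory. (The actual GNU APL Cell layout is different; this only
// illustrates the "space between adjacent ravel elements".)
struct Cell {
    int    type_tag;   // bookkeeping placeholder
    double value;      // the numeric payload
};

// Produce the packed "C vector" from the spaced ravel: a strided copy
// that keeps every payload and drops the bookkeeping in between.
std::vector<double> to_c_vector(const std::vector<Cell>& ravel) {
    std::vector<double> out;
    out.reserve(ravel.size());
    for (const Cell& c : ravel)
        out.push_back(c.value);
    return out;
}
```

The reverse direction (building a homogeneous APL value from a plain double *) is the same strided copy run the other way, which is why the conversion around matrix divide is so cheap.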

I can provide you with more information if you want to go along this path.

/// Jürgen




On 04/29/2017 03:19 AM, Fred Weigel wrote:
Juergen, and other GNU APL experts.

I am exploring neural nets, word2vec and some other AI related areas.

Right now, I want to tie in Google's word2vec trained models (the
billion-word one, GoogleNews-vectors-negative300.bin.gz).

This is a binary file containing a lot of floating point data -- about
3.5GB of data. These are words, followed by cosine distances. I could
attempt to feed this in a slow way and put it into an APL workspace.
But... I also intend on attempting to feed the data to a GPU. So, what I
am looking for is a modification to GNU APL (and yes, I am willing to do
the work) -- to allow for the complete suppression of normal C++
allocations, etc., and to allow the introduction of simple float/double
vectors or matrices (it would be helpful to also allow "C"-ish or
UTF-8-ish strings): the data is (C string containing the word name)
(fixed number of floating point values)... repeated LOTS of times.
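Concretely, assuming the layout the original word2vec tool writes (an ASCII header "vocab_size dims\n", then per entry a space-terminated word followed by dims raw floats and a trailing newline -- an assumption worth double-checking against the actual file), a first-cut C++ reader sketch would be:

```cpp
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// Sketch of a reader for the assumed word2vec .bin layout:
//   "<vocab_size> <dims>\n"
//   then, per entry: word chars up to ' ', dims raw floats, '\n'
struct Embedding {
    std::string        word;
    std::vector<float> vec;
};

std::vector<Embedding> load_word2vec_bin(const char* path) {
    std::vector<Embedding> result;
    FILE* fp = std::fopen(path, "rb");
    if (!fp) return result;

    long vocab = 0, dims = 0;
    if (std::fscanf(fp, "%ld %ld", &vocab, &dims) != 2) {
        std::fclose(fp);
        return result;
    }

    for (long i = 0; i < vocab; ++i) {
        Embedding e;
        int ch;
        // read the word up to the separating space, skipping the
        // newline left over from the previous entry (or the header)
        while ((ch = std::fgetc(fp)) != EOF && ch != ' ')
            if (ch != '\n') e.word.push_back(char(ch));

        e.vec.resize(dims);
        if (std::fread(e.vec.data(), sizeof(float), dims, fp)
                != (size_t)dims)
            break;                      // truncated file
        result.push_back(std::move(e));
    }
    std::fclose(fp);
    return result;
}
```

Reading the whole 3.5GB this way front-loads the cost once; the float payload is then already in the packed form a GPU upload (or a lightweight APL-value conversion) would want.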

The data set(s) may be compressed, so I don't want to read them directly --
possibly from a shared memory region (64-bit system only, of course), or
perhaps using shared variables... but I don't think that would be fast
enough.

Anyway, this begins to allow the push into "big data" and AI
applications. Just looking for some input and ideas here.

Many thanks
Fred Weigel



