Hi Francesco,
I agree with you on using the Python API; it is my preferred interface to TensorFlow. We could aim to work on Pytave, so that we could call a Python script in charge of executing the TensorFlow code. However, I also agree with Rik: this would be a long path, and it would abstract us away from dealing directly with TensorFlow from Octave. The Python script would be in
As for parallelization, do you mean by parallelization options adding the possibility to choose whether the code runs on GPU or CPU? If we plan to use TensorFlow, that will only depend on the distribution installed, since the packages differ depending on whether you run TensorFlow on GPU or CPU [1]. At run time the installed distribution would be detected and the workload would be shared among the detected devices accordingly.
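Just to illustrate the detection point: a minimal Python sketch of a helper such a script could use to report which devices TensorFlow sees. This assumes a modern TensorFlow (the `tf.config.list_physical_devices` call); the `describe_devices` name is hypothetical, and the try/except keeps the script usable where TensorFlow is not installed.

```python
def describe_devices():
    """Return a short description of the compute devices TensorFlow sees."""
    try:
        # CPU-only or GPU-enabled build, depending on the installed distribution
        import tensorflow as tf
    except ImportError:
        return "tensorflow not installed"
    gpus = tf.config.list_physical_devices("GPU")
    if gpus:
        # TensorFlow found one or more GPUs and will place ops on them by default
        return "GPU x%d" % len(gpus)
    return "CPU only"

if __name__ == "__main__":
    print(describe_devices())
```

An Octave front end could then call this through Pytave and decide, before launching a heavy job, whether a GPU is actually available.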