1. How the feature presents itself to the user and hooks into Octave is separate from what happens behind the scenes to generate the words to suggest.
If we implemented a pop-up dialog box where the user could select the intended command, that could be tested without any neural network or nearest-neighbor search running in the background. It could be tested with a simple 10-word lookup table, just to show that the functionality is correct. Later, the actual suggestion feature should be a separate, independently testable function. Making it independent means defining the interface to that piece so that any algorithm can supply word suggestions through it. If a better algorithm comes along later, someone could implement it without having to change the code that actually hooks into Octave.
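To make the separation concrete, here is a minimal sketch in Python of what that interface might look like. All names here (`SuggestionProvider`, `LookupTableProvider`, `suggest`) are hypothetical, not part of any existing Octave code: the point is only that the UI side depends on the abstract interface, while the tiny hard-coded table is enough to test the dialog plumbing.

```python
from abc import ABC, abstractmethod

class SuggestionProvider(ABC):
    """Hypothetical interface any suggestion algorithm would implement."""

    @abstractmethod
    def suggest(self, mistyped, max_results=3):
        """Return a list of candidate command names for a mistyped input."""

class LookupTableProvider(SuggestionProvider):
    """Trivial stand-in: a tiny hard-coded table, enough to test the UI hook."""

    TABLE = {
        "pritnf": ["printf"],
        "plto": ["plot"],
        "siz": ["size", "sin"],
    }

    def suggest(self, mistyped, max_results=3):
        return self.TABLE.get(mistyped, [])[:max_results]

# The dialog code only ever sees SuggestionProvider, so a smarter
# algorithm can be swapped in later without touching the UI side.
provider = LookupTableProvider()
print(provider.suggest("plto"))  # -> ['plot']
```

The dialog box would take any `SuggestionProvider`, so replacing the lookup table with a nearest-neighbor or neural-network implementation later is just a matter of passing a different object.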
2. The difference between the two might depend a lot on which algorithm you actually use, but I was thinking of it as separating the functions that need to run at compile time from the function that runs live when the user mistypes a command: dictionary generation, neural network training, and so on happen in advance. The live algorithm needs to be computationally efficient so that it doesn't impose a delay on the user; for anything done in advance, computational effort is much less of a concern.
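A small sketch of that split, again in Python with hypothetical names: the offline step does whatever expensive preparation is needed and writes out a dictionary file, while the live step only does a cheap edit-distance scan over the loaded dictionary. A plain Levenshtein scan is just one possible live algorithm, used here for illustration.

```python
import json

# --- Offline step (run once, e.g. at build time) ---
def build_dictionary(command_names, path):
    # Any expensive work (scanning docs, training a model, ...) goes here.
    with open(path, "w") as f:
        json.dump(sorted(command_names), f)

# --- Online step (runs when the user mistypes; must be fast) ---
def edit_distance(a, b):
    """Plain Levenshtein distance via the standard DP recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def suggest(mistyped, dictionary, max_dist=2):
    """Return dictionary words within max_dist edits, closest first."""
    scored = [(edit_distance(mistyped, w), w) for w in dictionary]
    return [w for d, w in sorted(scored) if d <= max_dist]

commands = ["printf", "plot", "size", "sin", "zeros"]
print(suggest("plto", commands))  # -> ['plot']
```

Even this naive scan is fast enough for a few thousand command names; the offline step is where any heavier machinery (precomputed indexes, trained models) would live.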