
Re: [emacs-tangents] Including AI into Emacs


From: Christopher Howard
Subject: Re: [emacs-tangents] Including AI into Emacs
Date: Tue, 10 Dec 2024 09:14:48 -0900

Jean Louis <bugs@gnu.support> writes:

> Integration, if that is the right word, is enhancing the human workflow to 
> minimize effort and provide optimum results. That is what I mean.

That is not integration; that is optimization or efficiency. Integration may 
lead to better optimization or efficiency, but it might also have the opposite effect.

>
> Programmers are not necessarily scientists, and so they think in terms
> of typing. But it is possible to control light with brainwaves, with a
> special hat, or to type on a computer with eyeball movements.
>

None of those interfaces has any appeal to me at all. Well, okay, controlling 
light with brainwaves sounds interesting, at least. But even so, I don't see how 
the input interface has anything to do with whether or not LLMs (or other AI 
approaches) should be integrated into our workflow, unless an input interface is 
so compute-intensive that it requires some kind of cluster-based neural network 
just to work at all.

> Makers of LLMs now provide "trained" models that
> can write text and translate text more accurately than common translators. 
>

This sounds like an argument for using LLMs to do language translation, which I 
suppose must be acknowledged. Regarding prose: I've read the mind-numbing, 
generic prose output on the Internet that is now being spit out by LLMs, and I 
hope that goes away. The generated artwork is also terrible; it has been 
showing up on some of the cheap furnishing products we buy from China.

>> For activity (3), even if I can do it without the help of a remote
>> compute cluster, it is going to require a large model database, plus
>> intense computing resources, like a separate computer or an expensive
>> GPU requiring proprietary drivers.
>
> Here is an example that works without a GPU:
> https://github.com/Mozilla-Ocho/llamafile/
>
> and other examples on the same page. 
>

I don't see how a llama-driven chat interface or an image generator is going to 
be useful to me, or worth the computing costs. But I suppose if something like 
that could be specialized to have expert knowledge of the libraries on my 
computer or my workflow, it might be worth playing around with.

> Just as usual, you have got the computing cost, the electricity, and the
> wear on the computer.

My understanding was that, for LLMs, the difference in cost involves orders of 
magnitude. That is what I hear others saying, at least.

Regarding inference engines, I recall that with Prolog there is a lot of 
backtracking going on, so the essence of writing a workably efficient 
program was (1) coming up with intelligent rules, and (2) figuring out when to 
cut off the backtracking. I have an old Prolog book on my bookshelf, but I 
haven't played around with Prolog at all for years.
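
To illustrate what I mean by cutting off the backtracking, here is a rough
sketch in Python rather than Prolog (the names and the little puzzle are made
up for illustration; a real inference engine does much more):

    # Naive backtracking: at each slot, try every value and recurse.
    def solve(partial, choices, consistent):
        if not choices:
            yield partial
            return
        head, *rest = choices
        for value in head:
            if consistent(partial, value):          # the "intelligent rule"
                yield from solve(partial + [value], rest, consistent)
            # without a cut, the search backtracks here and tries the next value

    # With a "cut": commit to the first consistent solution and stop searching.
    def solve_first(partial, choices, consistent):
        for solution in solve(partial, choices, consistent):
            return solution                          # discard remaining alternatives
        return None

    # Tiny example: pick one digit per slot so no two adjacent digits are equal.
    slots = [range(3), range(3), range(3)]
    ok = lambda partial, v: not partial or partial[-1] != v
    print(list(solve([], slots, ok)))   # every solution (full backtracking)
    print(solve_first([], slots, ok))   # just the first one; the search is cut short

The rules keep bad branches from being explored at all, and the cut keeps the
engine from exploring alternatives you no longer care about.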

-- 
Christopher Howard


