emacs-devel
Re: LLM Experiments, Part 1: Corrections


From: Andrew Hyatt
Subject: Re: LLM Experiments, Part 1: Corrections
Date: Mon, 22 Jan 2024 23:49:06 -0400

Thanks for pointing this out. I was using Gnus to respond to email, and it looks like it mangled things, probably for reasons having to do with quoting.  I don't think I've configured anything strange here, but who knows.  For now, I'll just use Gmail to respond.

On Mon, Jan 22, 2024 at 11:11 PM Emanuel Berg <incal@dataswamp.org> wrote:
Andrew Hyatt wrote:

>> [...]  1. From using gptel and ellama against the same model, I see
>> different style responses, and that kind of inconsistency would be
>> good to get a handle on; LLMs are difficult enough to figure out re
>> what they're doing without this additional variation.
>
> Is this keeping the prompt and temperature constant?  There's
> inconsistency, though, even keeping everything constant, due to the
> randomness of the LLM.  I often get very different results; for
> example, to make the demo I shared, I had to run it like 5 times
> because it would either do things too well (no need to demo
> corrections) or not well enough (for example, it wouldn't follow my
> orders to put everything in one paragraph).
>
>> 2. Package LLM has the laudable goal of bridging between models and
>> front-ends, and this is going to be vital.
>> 3. (1, 2) above lead to the following question:
>> 4. Can we write down a list of common configuration vars, common
>> here across the model axis?  Make it a union of all such
>> params. [...]

Uhm, pardon me for asking but why are the e-mails looking
like this?

--
underground experts united
https://dataswamp.org/~incal
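
To make the "union of common configuration vars" idea above a bit more
concrete: a front-end built on the llm package could keep one set of
generation parameters (temperature, max tokens, and so on) and apply it
whenever it builds a prompt, so the same settings reach whichever model
the provider wraps, and any remaining variation is just the model's own
sampling randomness.  A rough sketch follows; the constructor and slot
names (make-llm-openai, make-llm-chat-prompt,
make-llm-chat-prompt-interaction, :temperature, :max-tokens) reflect my
reading of the llm package and may not match every version, and the
key, model, and "my-" names are placeholders.

(require 'llm)
(require 'llm-openai)

;; Hypothetical provider; substitute your own key and model, or any
;; other llm provider struct.
(defvar my-llm-provider
  (make-llm-openai :key "YOUR-KEY" :chat-model "gpt-3.5-turbo"))

;; Generation parameters meant to be identical no matter which
;; front-end or provider is in use.
(defvar my-llm-common-params '(:temperature 0.1 :max-tokens 512))

(defun my-llm-ask (text)
  "Send TEXT to `my-llm-provider' with the shared parameters applied."
  (llm-chat my-llm-provider
            (make-llm-chat-prompt
             :interactions (list (make-llm-chat-prompt-interaction
                                  :role 'user :content text))
             :temperature (plist-get my-llm-common-params :temperature)
             :max-tokens (plist-get my-llm-common-params :max-tokens))))

Even with the prompt and these parameters held fixed, runs can still
differ because of the sampling randomness noted above, which is why the
demo took several attempts.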


