From: Andrew Hyatt
Subject: Re: LLM Experiments, Part 1: Corrections
Date: Mon, 22 Jan 2024 23:49:06 -0400
Andrew Hyatt wrote:
>> [...] 1. From using gptel and ellama against the same
>> model, I see different styles of response, and that
>> kind of inconsistency would be good to get a handle on;
>> LLMs are difficult enough to figure out re what they're
>> doing without this additional variation.
>
> Is this keeping the prompt and temperature constant? There's
> inconsistency even with everything held constant, though, due
> to the randomness of the LLM. I often get very different
> results; for example, to make the demo I shared, I had to run
> it about five times, because it would either do things too
> well (no need to demo corrections) or not well enough (for
> example, it wouldn't follow my instructions to put everything
> in one paragraph).
>
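
FWIW, one way to take the front-end out of the picture is to
call the llm package directly with the prompt and temperature
pinned. A minimal sketch (untested; the prompt constructor and
its keywords may differ between llm versions):

    (require 'llm)
    (require 'llm-openai)

    (let* ((provider (make-llm-openai :key (getenv "OPENAI_API_KEY")))
           (prompt (llm-make-chat-prompt
                    "Rewrite the region as one paragraph."
                    :temperature 0.0)))  ; low temperature = less sampling noise
      (llm-chat provider prompt))

Even at temperature 0 the hosted models are not fully
deterministic, but it removes one source of variation when
comparing gptel and ellama against the same model.
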
>> 2. Package LLM has the laudable goal of bridging between
>> models and front-ends, and this is going to be
>> vital.
>> 3. (1, 2) above lead to the following question:
>> 4. Can we write down a list of common configuration
>> vars --- common across the model axis? Make it
>> a union of all such params. [...]
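
Regarding 2 and 4: the llm package already gives front-ends one
calling convention across providers, so a union of common params
(temperature, max tokens, ...) could live in the prompt object
rather than in each front-end. A sketch, with a hypothetical
helper and illustrative provider settings:

    (require 'llm)
    (require 'llm-openai)
    (require 'llm-ollama)

    (defun my-llm-ask (provider text)
      "Send TEXT to any llm PROVIDER with shared defaults."
      (llm-chat provider
                (llm-make-chat-prompt
                 text
                 :temperature 0.2    ; common across models
                 :max-tokens 500)))  ; common across models

    ;; Same front-end code, different back-ends:
    ;; (my-llm-ask (make-llm-openai :key (getenv "OPENAI_API_KEY")) "hi")
    ;; (my-llm-ask (make-llm-ollama :chat-model "mistral") "hi")

Providers that lack a given knob could simply ignore it, which
would give you the "union" semantics.
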
Uhm, pardon me for asking, but why do the e-mails look
like this?
--
underground experts united
https://dataswamp.org/~incal