
[elpa] externals/llm 843cf24aa4 04/13: Added endpoint parameter to documentation.


From: ELPA Syncer
Subject: [elpa] externals/llm 843cf24aa4 04/13: Added endpoint parameter to documentation.
Date: Wed, 7 Feb 2024 18:58:11 -0500 (EST)

branch: externals/llm
commit 843cf24aa47aaffd9d1116e89d9c2e3ed0291dc1
Author: Thomas E. Allen <thomas@assistivemachines.com>
Commit: Thomas E. Allen <thomas@assistivemachines.com>

    Added endpoint parameter to documentation.
---
 README.org | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.org b/README.org
index 42aeba8556..2a2659e598 100644
--- a/README.org
+++ b/README.org
@@ -56,6 +56,7 @@ In addition to the provider, which you may want multiple of (for example, to cha
 - ~:port~: The port that ollama is run on.  This is optional and will default to the default ollama port.
 - ~:chat-model~: The model name to use for chat.  This is not optional for chat use, since there is no default.
 - ~:embedding-model~: The model name to use for embeddings.  This is not optional for embedding use, since there is no default.
+- ~:endpoint~: The ollama endpoint to use, either "generate" or "chat".  This is optional and will default to "generate".
 ** GPT4All
 [[https://gpt4all.io/index.html][GPT4All]] is a way to run large language models locally.  To use it with =llm= package, you must click "Enable API Server" in the settings.  It does not offer embeddings or streaming functionality, though, so Ollama might be a better fit for users who are not already set up with local models.  You can set it up with the following parameters:
 - ~:host~: The host that GPT4All is run on.  This is optional and will default to localhost.
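
For readers who want to try the parameters documented above, here is a minimal sketch in Emacs Lisp.  It assumes the make-llm-ollama constructor accepts the keywords described in this patch series; the host, port, and model names are illustrative values, not part of the patch.

;; Minimal sketch: construct an Ollama provider using the parameters
;; documented above.  The model name "mistral:latest" is an assumption
;; for illustration; substitute a model you have pulled locally.
(require 'llm-ollama)

(defvar my-ollama-provider
  (make-llm-ollama
   :host "localhost"                  ; optional, defaults to localhost
   :port 11434                        ; optional, defaults to the ollama port
   :chat-model "mistral:latest"       ; required for chat, no default
   :embedding-model "mistral:latest"  ; required for embeddings, no default
   :endpoint "chat"))                 ; optional, defaults to "generate"

The resulting provider object can then be passed to the package's generic entry points such as llm-chat or llm-embedding.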


