From: ELPA Syncer
Subject: [elpa] externals/llm 843cf24aa4 04/13: Added endpoint parameter to documentation.
Date: Wed, 7 Feb 2024 18:58:11 -0500 (EST)
branch: externals/llm
commit 843cf24aa47aaffd9d1116e89d9c2e3ed0291dc1
Author: Thomas E. Allen <thomas@assistivemachines.com>
Commit: Thomas E. Allen <thomas@assistivemachines.com>
Added endpoint parameter to documentation.
---
README.org | 1 +
1 file changed, 1 insertion(+)
diff --git a/README.org b/README.org
index 42aeba8556..2a2659e598 100644
--- a/README.org
+++ b/README.org
@@ -56,6 +56,7 @@ In addition to the provider, which you may want multiple of (for example, to cha
 - ~:port~: The port that ollama is run on. This is optional and will default to the default ollama port.
 - ~:chat-model~: The model name to use for chat. This is not optional for chat use, since there is no default.
 - ~:embedding-model~: The model name to use for embeddings. This is not optional for embedding use, since there is no default.
+- ~:endpoint~: The ollama endpoint to use, either "generate" or "chat". This is optional and will default to "generate".
 ** GPT4All
 [[https://gpt4all.io/index.html][GPT4All]] is a way to run large language models locally. To use it with =llm= package, you must click "Enable API Server" in the settings. It does not offer embeddings or streaming functionality, though, so Ollama might be a better fit for users who are not already set up with local models. You can set it up with the following parameters:
 - ~:host~: The host that GPT4All is run on. This is optional and will default to localhost.
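For orientation, here is a minimal sketch of how the slots documented in this hunk would be used. It assumes the providers are built with the package's usual cl-defstruct constructors (make-llm-ollama and make-llm-gpt4all), and the "llama2" model names are placeholders, not defaults. Note that the :endpoint slot is specific to this revision: later commits in this series (ea2ec282aa, 9e7344ac27) remove /generate endpoint support and the README mention again.

;; Sketch only, assuming the cl-defstruct constructors below exist at
;; this revision; "llama2" is a placeholder model name.
(require 'llm-ollama)
(require 'llm-gpt4all)

;; Ollama provider using the /api/chat endpoint added earlier in this
;; series (commit a4d7098c44).
(defvar my-ollama-provider
  (make-llm-ollama
   :chat-model "llama2"       ; required for chat use; there is no default
   :embedding-model "llama2"  ; required for embedding use; no default
   :endpoint "chat"))         ; optional; defaults to "generate"

;; GPT4All provider; :host is optional and defaults to localhost.
(defvar my-gpt4all-provider
  (make-llm-gpt4all :host "localhost"))

The "generate" and "chat" values map to Ollama's /api/generate and /api/chat HTTP endpoints, which is what the new README bullet is describing.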
- [elpa] externals/llm updated (bed2cb774d -> a343797144), ELPA Syncer, 2024/02/07
- [elpa] externals/llm 3147810ec4 03/13: Minor changes to new function comments., ELPA Syncer, 2024/02/07
- [elpa] externals/llm a4d7098c44 01/13: Added support for Ollama /api/chat endpoint, ELPA Syncer, 2024/02/07
- [elpa] externals/llm 1c3727ce50 05/13: Restored comment that I had accidentally dropped from the generate endpoint helper., ELPA Syncer, 2024/02/07
- [elpa] externals/llm 843cf24aa4 04/13: Added endpoint parameter to documentation., ELPA Syncer <=
- [elpa] externals/llm ea72852375 09/13: Merge remote-tracking branch 'upstream/main' into ollama-chat-endpoint-support, ELPA Syncer, 2024/02/07
- [elpa] externals/llm ea2ec282aa 10/13: Removed /generate endpoint support based on PR feedback, ELPA Syncer, 2024/02/07
- [elpa] externals/llm a343797144 13/13: Merge pull request #16 from tquartus/ollama-chat-endpoint-support, ELPA Syncer, 2024/02/07
- [elpa] externals/llm b9fc46f333 08/13: Resolved merge conflicts and merged upstream/main into ollama-chat-endpoint-support., ELPA Syncer, 2024/02/07
- [elpa] externals/llm 1e08b7381d 07/13: Merge branch 'ahyatt:main' into ollama-chat-endpoint-support, ELPA Syncer, 2024/02/07
- [elpa] externals/llm 61db5c3cf8 02/13: Corrected form of comments of helper functions., ELPA Syncer, 2024/02/07
- [elpa] externals/llm a1b17b0170 06/13: Remove unneeded space at end of line., ELPA Syncer, 2024/02/07
- [elpa] externals/llm 9e7344ac27 11/13: Minor clean up, remove mention of :endpoint slot in README., ELPA Syncer, 2024/02/07
- [elpa] externals/llm 993081f072 12/13: Minor changes, ELPA Syncer, 2024/02/07