From: ELPA Syncer
Subject: [elpa] externals/ellama ede32c5fb7 3/7: Add comment about user configuration
Date: Mon, 22 Jan 2024 15:57:50 -0500 (EST)
branch: externals/ellama
commit ede32c5fb78f58a6bc7a225a4f016235aa5363b7
Author: Sergey Kostyaev <sskostyaev@gmail.com>
Commit: Sergey Kostyaev <sskostyaev@gmail.com>
Add comment about user configuration
---
README.org | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/README.org b/README.org
index ab2df16cfc..55fecdced2 100644
--- a/README.org
+++ b/README.org
@@ -26,22 +26,24 @@ In that case you should customize ellama configuration like this:
(setopt ellama-language "German")
(require 'llm-ollama)
(setopt ellama-provider
-        (make-llm-ollama
-         :chat-model "mistral:7b-instruct-v0.2-q6_K"
-         :embedding-model "mistral:7b-instruct-v0.2-q6_K"))
+        (make-llm-ollama
+         ;; this model should be pulled to use it
+         ;; value should be the same as you print in terminal during pull
+         :chat-model "mistral:7b-instruct-v0.2-q6_K"
+         :embedding-model "mistral:7b-instruct-v0.2-q6_K"))
;; Predefined llm providers for interactive switching.
;; You shouldn't add ollama providers here - it can be selected interactively
;; without it. It is just example.
(setopt ellama-providers
-        '(("zephyr" . (make-llm-ollama
-                       :chat-model "zephyr:7b-beta-q6_K"
-                       :embedding-model "zephyr:7b-beta-q6_K"))
-          ("mistral" . (make-llm-ollama
-                        :chat-model "mistral:7b-instruct-v0.2-q6_K"
-                        :embedding-model "mistral:7b-instruct-v0.2-q6_K"))
-          ("mixtral" . (make-llm-ollama
-                        :chat-model "mixtral:8x7b-instruct-v0.1-q3_K_M-4k"
-                        :embedding-model "mixtral:8x7b-instruct-v0.1-q3_K_M-4k")))))
+        '(("zephyr" . (make-llm-ollama
+                       :chat-model "zephyr:7b-beta-q6_K"
+                       :embedding-model "zephyr:7b-beta-q6_K"))
+          ("mistral" . (make-llm-ollama
+                        :chat-model "mistral:7b-instruct-v0.2-q6_K"
+                        :embedding-model "mistral:7b-instruct-v0.2-q6_K"))
+          ("mixtral" . (make-llm-ollama
+                        :chat-model "mixtral:8x7b-instruct-v0.1-q3_K_M-4k"
+                        :embedding-model "mixtral:8x7b-instruct-v0.1-q3_K_M-4k")))))
#+END_SRC
** Commands
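
The comment added in this diff points out that the configured model has to be pulled into Ollama before ellama can use it, and that the :chat-model / :embedding-model values must match the tag printed in the terminal during the pull. A minimal sketch of such a setup, assuming Ollama is running locally and the llm-ollama backend from the llm package is installed:

#+BEGIN_SRC emacs-lisp
;; Sketch only: assumes the model was already pulled in a shell, e.g.
;;   ollama pull zephyr:7b-beta-q6_K
;; The model tag below must match what `ollama pull' printed.
(require 'llm-ollama)
(setopt ellama-provider
        (make-llm-ollama
         :chat-model "zephyr:7b-beta-q6_K"
         :embedding-model "zephyr:7b-beta-q6_K"))
#+END_SRC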