From: ELPA Syncer
Subject: [elpa] externals/ellama d8224613ac 1/3: Add interactive provider selection
Date: Mon, 18 Dec 2023 06:57:48 -0500 (EST)
branch: externals/ellama
commit d8224613acf4bfbd5879adcd02f25562fefc13bb
Author: Sergey Kostyaev <kostyaev.sergey2@wb.ru>
Commit: Sergey Kostyaev <kostyaev.sergey2@wb.ru>
Add interactive provider selection
---
README.md | 66 +++++++++++++++++++++++++++++++++++++++++++++------------------
ellama.el | 43 +++++++++++++++++++++++++++++++++++++++--
2 files changed, 88 insertions(+), 21 deletions(-)
diff --git a/README.md b/README.md
index aff2d0f6ce..098394a104 100644
--- a/README.md
+++ b/README.md
@@ -18,10 +18,10 @@ output, making it effortless to use with your preferred text editor.
`ellama-ask-selection` and `ellama-ask-line`. Some cosmetic changes
done.
- `28.10.2023` - Switched from
-[ollama](https://github.com/jmorganca/ollama)'s API to [llm
-library](https://elpa.gnu.org/packages/llm.html). [Many
-providers](https://github.com/ahyatt/llm#setting-up-providers)
-supported.
+ [ollama](https://github.com/jmorganca/ollama)'s API to
+ [llm library](https://elpa.gnu.org/packages/llm.html).
+ [Many providers](https://github.com/ahyatt/llm#setting-up-providers)
+ supported.
## Installation
@@ -43,7 +43,20 @@ ellama configuration like this:
(require 'llm-ollama)
(setopt ellama-provider
(make-llm-ollama
- :chat-model "zephyr:7b-alpha-q5_K_M" :embedding-model "zephyr:7b-alpha-q5_K_M")))
+ :chat-model "zephyr:7b-beta-q6_K" :embedding-model "zephyr:7b-beta-q6_K"))
+ ;; Predefined llm providers for interactive switching.
+ ;; You shouldn't add ollama providers here; they can be selected
+ ;; interactively without it. This is just an example.
+ (setopt ellama-providers
+ '(("zephyr" . (make-llm-ollama
+ :chat-model "zephyr:7b-beta-q6_K"
+ :embedding-model "zephyr:7b-beta-q6_K"))
+ ("mistral" . (make-llm-ollama
+ :chat-model "mistral:7b-instruct-v0.2-q6_K"
+ :embedding-model "mistral:7b-instruct-v0.2-q6_K"))
+ ("mixtral" . (make-llm-ollama
+ :chat-model "mixtral:8x7b-instruct-v0.1-q3_K_M-4k"
+ :embedding-model "mixtral:8x7b-instruct-v0.1-q3_K_M-4k")))))
```
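A hedged illustration (not part of the commit): choosing an entry from the `ellama-providers` alist above evaluates the stored constructor form and assigns the result to `ellama-provider`. Selecting the "mistral" entry, for example, would be equivalent to evaluating:

```elisp
;; Illustration only: the effective result of picking "mistral" via
;; `ellama-provider-select', using the model names from the example
;; configuration above.
(setopt ellama-provider
        (make-llm-ollama
         :chat-model "mistral:7b-instruct-v0.2-q6_K"
         :embedding-model "mistral:7b-instruct-v0.2-q6_K"))
```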
## Commands
@@ -97,7 +110,8 @@ Review code in a selected region or the current buffer using Ellama.
### ellama-change
-Change text in a selected region or the current buffer according to a provided change.
+Change text in a selected region or the current buffer according to a
+provided change.
### ellama-enhance-grammar-spelling
@@ -111,27 +125,33 @@ Enhance the wording in the currently selected region or buffer using Ellama.
### ellama-make-concise
-Make the text of the currently selected region or buffer concise and simple using Ellama.
+Make the text of the currently selected region or buffer concise and
+simple using Ellama.
### ellama-change-code
-Change selected code or code in the current buffer according to a provided change using Ellama.
+Change selected code or code in the current buffer according to a
+provided change using Ellama.
### ellama-enhance-code
-Change selected code or code in the current buffer according to a provided change using Ellama.
+Change selected code or code in the current buffer according to a
+provided change using Ellama.
### ellama-complete-code
-Complete selected code or code in the current buffer according to a provided change using Ellama.
+Complete selected code or code in the current buffer according to a
+provided change using Ellama.
### ellama-add-code
-Add new code according to a description, generating it with a provided context from the selected region or the current buffer using Ellama.
+Add new code according to a description, generating it with a provided
+context from the selected region or the current buffer using Ellama.
### ellama-render
-Render the currently selected text or the text in the current buffer as a specified format using Ellama.
+Render the currently selected text or the text in the current buffer
+as a specified format using Ellama.
### ellama-make-list
@@ -145,6 +165,10 @@ Create a markdown table from the active region or the current buffer using Ellam
Summarize a webpage fetched from a URL using Ellama.
+### ellama-provider-select
+
+Select ellama provider.
+
### ellama-code-complete
Alias to the `ellama-complete-code` function.
@@ -207,6 +231,7 @@ Ellama, using the `C-x e` prefix:
| "t t" | ellama-translate | Text translate |
| "t c" | ellama-complete | Text complete |
| "d w" | ellama-define-word | Define word |
+| "p s" | ellama-provider-select | Provider select |
## Configuration
@@ -218,16 +243,19 @@ The following variables can be customized for the Ellama client:
- `ellama-user-nick`: The user nick in logs.
- `ellama-assistant-nick`: The assistant nick in logs.
- `ellama-buffer-mode`: The major mode for the Ellama logs buffer.
- Default mode is `markdown-mode`.
+Default mode is `markdown-mode`.
- `ellama-language`: The language for Ollama translation. Default
- language is english.
+language is english.
- `ellama-provider`: llm provider for ellama. Default provider is
- `ollama` with [zephyr](https://ollama.ai/library/zephyr) model.
- There are many supported providers: `ollama`, `open ai`, `vertex`,
- `GPT4All`. For more information see [llm
- documentation](https://elpa.gnu.org/packages/llm.html)
+`ollama` with [zephyr](https://ollama.ai/library/zephyr) model.
+There are many supported providers: `ollama`, `open ai`, `vertex`,
+`GPT4All`. For more information see [llm
+documentation](https://elpa.gnu.org/packages/llm.html)
+- `ellama-providers`: association list of model llm providers with
+ name as key.
- `ellama-spinner-type`: Spinner type for ellama. Default type is
- `progress-bar`.
+`progress-bar`.
+- `ellama-ollama-binary`: Path to ollama binary.
## Acknowledgments
diff --git a/ellama.el b/ellama.el
index 7ac162192b..e676461465 100644
--- a/ellama.el
+++ b/ellama.el
@@ -6,7 +6,7 @@
;; URL: http://github.com/s-kostyaev/ellama
;; Keywords: help local tools
;; Package-Requires: ((emacs "28.1") (llm "0.6.0") (spinner "1.7.4"))
-;; Version: 0.3.2
+;; Version: 0.4.0
;; SPDX-License-Identifier: GPL-3.0-or-later
;; Created: 8th Oct 2023
@@ -77,6 +77,11 @@
:group 'tools
:type '(sexp :validate 'cl-struct-p))
+(defcustom ellama-providers nil
+ "LLM provider list for fast switching."
+ :group 'tools
+ :type '(alist :key-type string))
+
(defcustom ellama-spinner-type 'progress-bar
"Spinner type for ellama."
:group 'tools
@@ -90,6 +95,11 @@
:type 'string
:group 'tools)
+(defcustom ellama-ollama-binary (executable-find "ollama")
+ "Path to ollama binary."
+ :type 'string
+ :group 'tools)
+
(defvar-local ellama--chat-prompt nil)
(defvar-local ellama--change-group nil)
@@ -138,7 +148,9 @@
("t t" ellama-translate "Text translate")
("t c" ellama-complete "Text complete")
;; define
- ("d w" ellama-define-word "Define word"))))
+ ("d w" ellama-define-word "Define word")
+ ;; provider
+ ("p s" ellama-provider-select "Provider select"))))
(dolist (key-command key-commands)
(define-key ellama-keymap (kbd (car key-command)) (cadr key-command)))))
@@ -568,5 +580,32 @@ buffer."
(kill-region (point) (point-max))
(ellama-summarize))))
+(defun ellama-get-ollama-local-model ()
+ "Return llm provider for interactively selected ollama model."
+ (interactive)
+ (let ((model-name
+ (completing-read "Select ollama model: "
+ (mapcar (lambda (s)
+ (car (split-string s)))
+ (seq-drop
+ (process-lines ellama-ollama-binary "ls") 1)))))
+ (make-llm-ollama
+ :chat-model model-name :embedding-model model-name)))
+
+;;;###autoload
+(defun ellama-provider-select ()
+ "Select ellama provider."
+ (interactive)
+ (let* ((providers (if (and ellama-ollama-binary
+ (file-exists-p ellama-ollama-binary))
+ (push '("ollama model" . (ellama-get-ollama-local-model))
+ ellama-providers)
+ ellama-providers))
+ (variants (mapcar #'car providers)))
+ (setq ellama-provider
+ (eval (alist-get
+ (completing-read "Select model: " variants)
+ providers nil nil #'string=)))))
+
(provide 'ellama)
;;; ellama.el ends here.
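The model listing in `ellama-get-ollama-local-model` above can be sketched with sample data. This is an illustration only, assuming `ollama ls` prints a header row followed by one row per model with the model name in the first column (which is what the `seq-drop` of the first line and the `split-string` imply); the sample rows below are hypothetical:

```elisp
;; Sketch of the parsing done by `ellama-get-ollama-local-model':
;; drop the header line, then keep the first whitespace-separated
;; column of each remaining line as the model name.
(let ((lines '("NAME                            ID       SIZE     MODIFIED"
               "zephyr:7b-beta-q6_K             abc123   5.0 GB   2 days ago"
               "mistral:7b-instruct-v0.2-q6_K   def456   5.9 GB   3 days ago")))
  (mapcar (lambda (s) (car (split-string s)))
          (seq-drop lines 1)))
;; => ("zephyr:7b-beta-q6_K" "mistral:7b-instruct-v0.2-q6_K")
```

The names returned here are then offered via `completing-read`, and the chosen one is used for both `:chat-model` and `:embedding-model`.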