emacs-elpa-diffs

From: ELPA Syncer
Subject: [elpa] externals/llm 6aaf9ea4ed 1/2: Add option to set scheme (http/https) for llm-ollama
Date: Sun, 5 Nov 2023 00:58:07 -0400 (EDT)

branch: externals/llm
commit 6aaf9ea4ed2c5b3f2fae830e220c07720fbc6f95
Author: Andrew Hyatt <ahyatt@gmail.com>
Commit: Andrew Hyatt <ahyatt@gmail.com>

    Add option to set scheme (http/https) for llm-ollama
    
    The configuration for llm-ollama can now be customized with a scheme
    setting, allowing users to choose between http and https.
    
    This fixes https://github.com/ahyatt/llm/issues/7.
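
    For example, the new slot is available through the keyword
    constructor that cl-defstruct generates (a minimal sketch; the
    host and model names are placeholders, not part of the commit):

        (make-llm-ollama :scheme "https"
                         :host "ollama.example.com"
                         :port 443
                         :chat-model "my-chat-model"
                         :embedding-model "my-embedding-model")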
---
 NEWS.org      | 1 +
 llm-ollama.el | 9 ++++++---
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/NEWS.org b/NEWS.org
index 05e47c8300..29dc63cbbc 100644
--- a/NEWS.org
+++ b/NEWS.org
@@ -1,6 +1,7 @@
 * Version 0.5.2
 - Fix incompatibility with older Emacs introduced in Version 0.5.1.
 - Add support for Google Cloud Vertex model =text-bison= and variants.
+- =llm-ollama= can now be configured with a scheme (http vs https).
 * Version 0.5.1
 - Implement token counting for Google Cloud Vertex via their API.
 - Fix issue with Google Cloud Vertex erroring on multibyte strings.
diff --git a/llm-ollama.el b/llm-ollama.el
index ca5f1e91c3..0d754a3bd9 100644
--- a/llm-ollama.el
+++ b/llm-ollama.el
@@ -47,6 +47,9 @@
 (cl-defstruct llm-ollama
   "A structure for holding information needed by Ollama's API.
 
+SCHEME is the URL scheme to use, a string. It is optional and
+defaults to `http'.
+
 HOST is the host that Ollama is running on. It is optional and
 defaults to localhost.
 
@@ -55,7 +58,7 @@ PORT is the localhost port that Ollama is running on.  It is optional.
 CHAT-MODEL is the model to use for chat queries. It is required.
 
 EMBEDDING-MODEL is the model to use for embeddings.  It is required."
-  host port chat-model embedding-model)
+  (scheme "http") (host "localhost") (port 11434) chat-model embedding-model)
 
 ;; Ollama's models may or may not be free, we have no way of knowing. There's no
 ;; way to tell, and no ToS to point out here.
@@ -65,8 +68,8 @@ EMBEDDING-MODEL is the model to use for embeddings.  It is required."
 
 (defun llm-ollama--url (provider method)
   "With ollama PROVIDER, return url for METHOD."
-  (format "http://%s:%d/api/%s"; (or (llm-ollama-host provider) "localhost")
-          (or (llm-ollama-port provider) 11434) method))
+  (format "%s://%s:%d/api/%s" (llm-ollama-scheme provider )(llm-ollama-host 
provider)
+          (llm-ollama-port provider) method))
 
 (defun llm-ollama--embedding-request (provider string)
   "Return the request to the server for the embedding of STRING.


