
[elpa] externals/llm e19a678e84 2/2: Fix breakage with Open AI's llm-chat-token-limit (#77)


From: ELPA Syncer
Subject: [elpa] externals/llm e19a678e84 2/2: Fix breakage with Open AI's llm-chat-token-limit (#77)
Date: Tue, 3 Sep 2024 03:58:40 -0400 (EDT)

branch: externals/llm
commit e19a678e84bf8168cac48662063bc44ca783cfad
Author: Andrew Hyatt <ahyatt@gmail.com>
Commit: GitHub <noreply@github.com>

    Fix breakage with Open AI's llm-chat-token-limit (#77)
    
    Also fix unused variable in llm-tester.
    
    Add byte-compilation to the CI to catch issues like this in the future.
---
 .github/workflows/ci.yaml | 4 ++++
 NEWS.org                  | 2 ++
 llm-openai.el             | 4 ++--
 llm-tester.el             | 3 +--
 4 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/.github/workflows/ci.yaml b/.github/workflows/ci.yaml
index 5fae703f94..b9a8ee8a45 100644
--- a/.github/workflows/ci.yaml
+++ b/.github/workflows/ci.yaml
@@ -31,6 +31,10 @@ jobs:
     - name: Check out the source code
       uses: actions/checkout@v4
 
+    - name: Byte-compile the project
+      run: |
+        eldev -dtT compile --warnings-as-errors
+
     - name: Lint the project
       run: |
         eldev -p -dtT lint
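
Note that the new compile step is added ahead of the existing lint step, so byte-compiler warnings such as references to free variables fail the job before linting even runs. The same `eldev -dtT compile --warnings-as-errors' invocation can presumably be run locally from a checkout with Eldev installed, to catch this class of bug before pushing.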
diff --git a/NEWS.org b/NEWS.org
index 633ae5b266..42955c58b4 100644
--- a/NEWS.org
+++ b/NEWS.org
@@ -1,3 +1,5 @@
+* Version 0.17.4
+- Fix problem with Open AI's =llm-chat-token-limit=.
 * Version 0.17.3
 - More fixes with Claude and Ollama function calling conversation, thanks to Paul Nelson.
 - Make =llm-chat-streaming-to-point= more efficient, just inserting new text, thanks to Paul Nelson.
diff --git a/llm-openai.el b/llm-openai.el
index 4c94189ce9..5e50d7896e 100644
--- a/llm-openai.el
+++ b/llm-openai.el
@@ -270,8 +270,8 @@ RESPONSE can be nil if the response is complete."
       4096)
      (t 4096))))
 
-(cl-defmethod llm-chat-token-limit ((_ llm-openai-compatible))
-  (llm-provider-utils-model-token-limit (llm-ollama-chat-model provider)))
+(cl-defmethod llm-chat-token-limit ((provider llm-openai-compatible))
+  (llm-provider-utils-model-token-limit (llm-openai-chat-model provider)))
 
 (cl-defmethod llm-capabilities ((_ llm-openai))
   (list 'streaming 'embeddings 'function-calls))
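
The removed method shows why the new compile step matters: the argument was bound as `_' (deliberately ignored), so the body's reference to `provider' was a free variable, and `llm-ollama-chat-model' is the Ollama accessor applied to an OpenAI-compatible provider. A minimal sketch of the first problem, using hypothetical names, that byte-compilation flags under lexical binding:

    ;; -*- lexical-binding: t; -*-
    ;; `_' means the parameter is intentionally ignored, so `provider'
    ;; in the body is a free variable.  Byte-compiling this emits
    ;; "reference to free variable `provider'", which the CI's
    ;; --warnings-as-errors flag escalates to a build failure.
    (defun demo-token-limit (_)
      (demo-chat-model provider))

The second problem, calling the wrong provider's accessor, is semantic rather than syntactic, so the compiler cannot catch it on its own; the fix renames the parameter to `provider' and switches to `llm-openai-chat-model'.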
diff --git a/llm-tester.el b/llm-tester.el
index e524cb36f8..6ec5cfa2e0 100644
--- a/llm-tester.el
+++ b/llm-tester.el
@@ -262,8 +262,7 @@ of by calling the `describe_function' function."
 
 (defun llm-tester-function-calling-sync (provider)
   "Test that PROVIDER can call functions."
-  (let ((prompt (llm-tester-create-test-function-prompt))
-        (result (llm-chat provider (llm-tester-create-test-function-prompt))))
+  (let ((result (llm-chat provider (llm-tester-create-test-function-prompt))))
     (cond ((stringp result)
            (llm-tester-log
             "ERROR: Provider %s returned a string instead of a function result"


