emacs-elpa-diffs

From: ELPA Syncer
Subject: [elpa] externals/llm ffbaa5e683 40/71: Restore error callbacks
Date: Fri, 17 May 2024 00:58:47 -0400 (EDT)

branch: externals/llm
commit ffbaa5e68366f0a2bb8cd2586cf2c0f9a869b0fb
Author: Roman Scherer <roman@burningswell.com>
Commit: Roman Scherer <roman@burningswell.com>

    Restore error callbacks
---
 llm-gemini.el | 5 ++++-
 llm-vertex.el | 5 ++++-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/llm-gemini.el b/llm-gemini.el
index 232cfeac86..436d4454b9 100644
--- a/llm-gemini.el
+++ b/llm-gemini.el
@@ -147,7 +147,10 @@ If STREAMING-P is non-nil, use the streaming endpoint.
                      provider prompt (or function-call
                                          (if (> (length streamed-text) 0)
                                              streamed-text
-                                           (llm-vertex--get-chat-response data)))))))))
+                                           (llm-vertex--get-chat-response data))))))
+     :on-error (lambda (_ data)
+                 (llm-request-callback-in-buffer buf error-callback 'error
+                                                 (llm-vertex--error-message data))))))
 
 (defun llm-gemini--count-token-url (provider)
   "Return the URL for the count token call, using PROVIDER."
diff --git a/llm-vertex.el b/llm-vertex.el
index 890c998f19..dbe33a9c66 100644
--- a/llm-vertex.el
+++ b/llm-vertex.el
@@ -338,7 +338,10 @@ If STREAMING is non-nil, use the URL for the streaming API.
                      provider prompt (or function-call
                                          (if (> (length streamed-text) 0)
                                              streamed-text
-                                           (llm-vertex--get-chat-response data)))))))))
+                                           (llm-vertex--get-chat-response data))))))
+     :on-error (lambda (_ data)
+                 (llm-request-callback-in-buffer buf error-callback 'error
+                                                 (llm-vertex--error-message data))))))
 
 ;; Token counts
 ;; https://cloud.google.com/vertex-ai/docs/generative-ai/get-token-count
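Both hunks restore the same pattern: the request's `:on-error` handler relays the provider's error payload, formatted by `llm-vertex--error-message`, back to the caller's `error-callback` in the originating buffer. From a user's perspective, this means Gemini and Vertex errors again reach the error callback supplied to the llm API. A caller-side sketch, assuming the llm package's `llm-chat-async` signature; the provider variable here is hypothetical:

```elisp
;; Caller-side sketch (assumes the llm package's async chat API).
;; With the :on-error handlers restored, provider errors are delivered
;; to the final lambda instead of being silently dropped.
(require 'llm)

(llm-chat-async
 my-gemini-provider                      ; hypothetical configured provider
 (llm-make-chat-prompt "Hello")
 (lambda (response)
   (message "Response: %s" response))
 (lambda (type msg)
   ;; `type' is a symbol such as `error'; `msg' is the text produced by
   ;; llm-vertex--error-message in the restored handlers above.
   (message "LLM error (%s): %s" type msg)))
```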


