emacs-elpa-diffs
From: ELPA Syncer
Subject: [nongnu] elpa/gptel e10c26e97e 1/2: gptel-openai: Handle max_tokens deprecation conditionally
Date: Tue, 26 Nov 2024 13:00:21 -0500 (EST)

branch: elpa/gptel
commit e10c26e97e20c597d110c40f49c73fb63c057396
Author: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
Commit: Karthik Chikmagalur <karthikchikmagalur@gmail.com>

    gptel-openai: Handle max_tokens deprecation conditionally
    
    * gptel-openai.el:
    (gptel--request-data): The OpenAI API has changed the key for max
    tokens from "max_tokens" to "max_completion_tokens", but the
    former is still required for OpenAI compatible APIs like
    GPT4All (#485).  Fix temporarily by checking `gptel-model': only
    the o1-* models require "max_completion_tokens" to be specified
    for now.
---
 gptel-openai.el | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/gptel-openai.el b/gptel-openai.el
index bd4c729b94..e33f17c3ff 100644
--- a/gptel-openai.el
+++ b/gptel-openai.el
@@ -147,7 +147,11 @@ with differing settings.")
     (when gptel-temperature
       (plist-put prompts-plist :temperature gptel-temperature))
     (when gptel-max-tokens
-      (plist-put prompts-plist :max_completion_tokens gptel-max-tokens))
+      ;; HACK: The OpenAI API has deprecated max_tokens, but we still need it
+      ;; for OpenAI-compatible APIs like GPT4All (#485)
+      (plist-put prompts-plist (if (memq gptel-model '(o1-preview o1-mini))
+                                   :max_completion_tokens :max_tokens)
+                 gptel-max-tokens))
     ;; Merge request params with model and backend params.
     (gptel--merge-plists
      prompts-plist
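
The key-selection logic in the patch above can be sketched in isolation. This is a minimal illustration, not part of the commit; the plain `let`-bound `gptel-model` and `gptel-max-tokens` stand in for the real gptel variables:

```elisp
;; Minimal sketch of the conditional plist-key selection from the patch.
;; o1-preview and o1-mini get the new OpenAI key; everything else keeps
;; the old key for OpenAI-compatible backends like GPT4All.
(let ((gptel-model 'o1-mini)
      (gptel-max-tokens 1024)
      (prompts-plist nil))
  (plist-put prompts-plist
             (if (memq gptel-model '(o1-preview o1-mini))
                 :max_completion_tokens
               :max_tokens)
             gptel-max-tokens))
```

With `gptel-model` bound to `o1-mini` this evaluates to `(:max_completion_tokens 1024)`; with any other model it evaluates to `(:max_tokens 1024)`, which is what the corresponding JSON request body will carry.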


