emacs-elpa-diffs

[elpa] externals/llm ecc1f5ff3e: Fix temperature calculation for llm-openai (#61)


From: ELPA Syncer
Subject: [elpa] externals/llm ecc1f5ff3e: Fix temperature calculation for llm-openai (#61)
Date: Wed, 14 Aug 2024 00:58:27 -0400 (EDT)

branch: externals/llm
commit ecc1f5ff3e20faa42cc56f784a5f0cbb106970b1
Author: Paul Nelson <63298781+ultronozm@users.noreply.github.com>
Commit: GitHub <noreply@github.com>

    Fix temperature calculation for llm-openai (#61)
    
    * llm-openai.el (llm-provider-chat-request): Fix temperature
    calculation.
    
    For the llm package, temperatures range from 0 to 1.  For OpenAI, they
    range from 0 to 2.  For this reason, we should multiply rather than
    divide by 2 to translate from llm temperatures to OpenAI temperatures.
---
 llm-openai.el | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llm-openai.el b/llm-openai.el
index e650f12474..be0068769a 100644
--- a/llm-openai.el
+++ b/llm-openai.el
@@ -162,7 +162,7 @@ STREAMING if non-nil, turn on response streaming."
     (push `("model" . ,(or (llm-openai-chat-model provider)
                            "gpt-3.5-turbo-0613")) request-alist)
     (when (llm-chat-prompt-temperature prompt)
-      (push `("temperature" . ,(/ (llm-chat-prompt-temperature prompt) 2.0)) 
request-alist))
+      (push `("temperature" . ,(* (llm-chat-prompt-temperature prompt) 2.0)) 
request-alist))
     (when (llm-chat-prompt-max-tokens prompt)
       (push `("max_tokens" . ,(llm-chat-prompt-max-tokens prompt)) 
request-alist))
     (when (llm-chat-prompt-functions prompt)

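For reference, a minimal sketch of the scaling this commit corrects; the
helper name my/llm-temperature-to-openai is hypothetical, not part of llm:

    ;; llm expresses temperature on a 0.0-1.0 scale, while the OpenAI API
    ;; accepts 0.0-2.0, so translating means multiplying by 2, not dividing.
    (defun my/llm-temperature-to-openai (temp)
      "Map an llm TEMP in [0.0, 1.0] onto OpenAI's [0.0, 2.0] scale."
      (* temp 2.0))

    ;; (my/llm-temperature-to-openai 0.5) => 1.0  ; midpoint maps to midpoint
    ;; The old (/ temp 2.0) confined every request to [0.0, 0.5], the bottom
    ;; quarter of OpenAI's range.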

