emacs-elpa-diffs

From: ELPA Syncer
Subject: [elpa] externals/llm fb6af64b2f: Added documentation for llm-chat-token-limit
Date: Sun, 21 Jan 2024 12:58:19 -0500 (EST)

branch: externals/llm
commit fb6af64b2f63f479d9a09f828d145f1005f24e51
Author: Andrew Hyatt <ahyatt@gmail.com>
Commit: Andrew Hyatt <ahyatt@gmail.com>

    Added documentation for llm-chat-token-limit
---
 README.org | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.org b/README.org
index 42aeba8556..2c0ce50801 100644
--- a/README.org
+++ b/README.org
@@ -99,6 +99,7 @@ For all callbacks, the callback will be executed in the buffer the function was
 - ~llm-count-tokens provider string~: Count how many tokens are in ~string~.  This may vary by ~provider~, because some providers implement an API for this, but it is typically about the same.  This gives an estimate if the provider has no API support.
 - ~llm-cancel-request request~: Cancels the given request, if possible.  The ~request~ object is the return value of async and streaming functions.
 - ~llm-name provider~: Provides a short name of the model or provider, suitable for showing to users.
+- ~llm-chat-token-limit provider~: Gets the token limit for the chat model.  This isn't possible for some backends like =llama.cpp=, in which the model isn't selected or known by this library.
 
   And the following helper functions:
  - ~llm-make-simple-chat-prompt text~: For the common case of just wanting a simple text prompt without the richness that the ~llm-chat-prompt~ struct provides, use this to turn a string into a ~llm-chat-prompt~ that can be passed to the main functions above.
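
For reference, a minimal sketch of how the generics documented above might be called.  It assumes the =llm-openai= backend with a placeholder key; any provider struct would work the same way:

#+begin_src emacs-lisp
(require 'llm)
(require 'llm-openai)  ; one of several provider backends

;; Illustrative provider setup; the key is a placeholder.
(let ((provider (make-llm-openai :key "my-openai-key")))
  ;; Short, user-visible name of the model or provider.
  (message "Provider: %s" (llm-name provider))
  ;; Token limit of the chat model, where the backend knows it.
  (message "Token limit: %s" (llm-chat-token-limit provider))
  ;; Estimate of how many tokens a string would consume.
  (message "Tokens: %d" (llm-count-tokens provider "Hello, world"))
  ;; Turn a plain string into an llm-chat-prompt, start an async chat,
  ;; and cancel it; the return value of llm-chat-async is the request
  ;; object that llm-cancel-request expects.
  (let ((request (llm-chat-async provider
                                 (llm-make-simple-chat-prompt "Say hi")
                                 (lambda (response) (message "%s" response))
                                 (lambda (type msg)
                                   (message "Error %s: %s" type msg)))))
    (llm-cancel-request request)))
#+end_src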


