
From: ELPA Syncer
Subject: [nongnu] elpa/gptel d177822a7f 01/11: README: Add a section about selecting backends explicitly (#467)
Date: Sat, 30 Nov 2024 07:00:04 -0500 (EST)

branch: elpa/gptel
commit d177822a7f185d34f67f0ce71ef1c9643ed21a6d
Author: Martin Blais <blais@furius.ca>
Commit: GitHub <noreply@github.com>

    README: Add a section about selecting backends explicitly (#467)
    
    * README: Add an explicit section explaining how to interactively select 
your backend and model.
---
 README.org | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/README.org b/README.org
index 96176ca684..7c640ef828 100644
--- a/README.org
+++ b/README.org
@@ -88,6 +88,7 @@ gptel uses Curl if available, but falls back to url-retrieve to work without ext
     - [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
       - [[#including-media-images-documents-with-requests][Including media (images, documents) with requests]]
       - [[#save-and-restore-your-chat-sessions][Save and restore your chat sessions]]
+    - [[#selecting-a-backend][Selecting a backend]]
     - [[#include-more-context-with-requests][Include more context with requests]]
     - [[#rewrite-refactor-or-fill-in-a-region][Rewrite, refactor or fill in a region]]
     - [[#extra-org-mode-conveniences][Extra Org mode conveniences]]
@@ -952,6 +953,10 @@ Similar criteria apply to Markdown chat buffers.
 Saving the file will save the state of the conversation as well.  To resume the chat, open the file and turn on =gptel-mode= before editing the buffer.
 
 #+html: </details>
+*** Selecting a backend
+
+Selecting a model or backend can be done interactively via the =-m= command of =gptel-menu=. Available registered models are prefixed by the name of their backend with a string like `ChatGPT:gpt-4o-mini`, where `ChatGPT` is the backend name you used to register it and `gpt-4o-mini` is the name of the model.
+
 *** Include more context with requests
 
 By default, gptel will query the LLM with the active region or the buffer contents up to the cursor.  Often it can be helpful to provide the LLM with additional context from outside the current buffer. For example, when you're in a chat buffer but want to ask questions about a (possibly changing) code buffer and auxiliary project files.
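The README section added by this commit covers interactive selection through =gptel-menu=; the same backend and model can also be set non-interactively from an init file. A minimal sketch, assuming an OpenAI-compatible backend registered under the name "ChatGPT" (the key value is a placeholder for your own configuration):

```elisp
;; Register a backend named "ChatGPT" and make it the default.
;; Models registered to it then appear in gptel-menu prefixed with
;; the backend name, e.g. ChatGPT:gpt-4o-mini.
(setq gptel-backend (gptel-make-openai "ChatGPT"
                      :stream t
                      :key "sk-...")  ; placeholder: your key, or a function returning it
      gptel-model 'gpt-4o-mini)
```

The backend name passed to =gptel-make-openai= is arbitrary; it is only used as the prefix when models are listed for selection.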


