[nongnu] elpa/gptel 26b515a8f4 4/4: gptel: Add Cerebras support (#372)
From: ELPA Syncer
Subject: [nongnu] elpa/gptel 26b515a8f4 4/4: gptel: Add Cerebras support (#372)
Date: Thu, 5 Sep 2024 15:59:47 -0400 (EDT)
branch: elpa/gptel
commit 26b515a8f4ef61abd5a81b3c622bf00fe73261a7
Author: Ian <180469889+bytegorilla@users.noreply.github.com>
Commit: GitHub <noreply@github.com>
gptel: Add Cerebras support (#372)
* README.org: Add Cerebras to the supported providers. Fixing
typos in the README
* gptel.el: Mention Cerebras in the package commentary.
---
README.org | 37 +++++++++++++++++++++++++++++++++++++
gptel.el | 8 ++++----
2 files changed, 41 insertions(+), 4 deletions(-)
diff --git a/README.org b/README.org
index ef4c2df0a4..d7076c0d23 100644
--- a/README.org
+++ b/README.org
@@ -24,6 +24,7 @@ gptel is a simple Large Language Model chat client for Emacs, with support for m
| OpenRouter | ✓ | [[https://openrouter.ai/keys][API key]] |
| PrivateGPT | ✓ | [[https://github.com/zylon-ai/private-gpt#-documentation][PrivateGPT running locally]] |
| DeepSeek | ✓ | [[https://platform.deepseek.com/api_keys][API key]] |
+| Cerebras | ✓ | [[https://cloud.cerebras.ai/][API key]] |
#+html: </div>
*General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
@@ -69,6 +70,7 @@ gptel uses Curl if available, but falls back to url-retrieve to work without ext
- [[#openrouter][OpenRouter]]
- [[#privategpt][PrivateGPT]]
- [[#deepseek][DeepSeek]]
+ - [[#cerebras][Cerebras]]
- [[#usage][Usage]]
- [[#in-any-buffer][In any buffer:]]
- [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
@@ -638,6 +640,41 @@ The above code makes the backend available to select. If you want it to be the
#+end_src
+#+html: </details>
+#+html: <details><summary>
+**** Cerebras
+#+html: </summary>
+
+Register a backend with
+#+begin_src emacs-lisp
+;; Cerebras offers an instant OpenAI compatible API
+(gptel-make-openai "Cerebras"
+  :host "api.cerebras.ai"
+  :endpoint "/v1/chat/completions"
+  :stream t ;optionally nil as Cerebras is instant AI
+  :key "your-api-key" ;can be a function that returns the key
+  :models '("llama3.1-70b"
+            "llama3.1-8b"))
+#+end_src
+
+You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
+
+***** (Optional) Set as the default gptel backend
+
+The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
+#+begin_src emacs-lisp
+;; OPTIONAL configuration
+(setq gptel-model "llama3.1-8b"
+      gptel-backend
+      (gptel-make-openai "Cerebras"
+        :host "api.cerebras.ai"
+        :endpoint "/v1/chat/completions"
+        :stream nil
+        :key "your-api-key"
+        :models '("llama3.1-70b"
+                  "llama3.1-8b")))
+#+end_src
+
#+html: </details>
** Usage
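As the =:key= comment in the snippet above notes, the key can also be a function of no arguments that returns the key at request time. A minimal sketch of that variant, assuming gptel's built-in =gptel-api-key-from-auth-source= helper and a matching =~/.authinfo= entry (this helper is part of gptel, but not part of this commit):

#+begin_src emacs-lisp
;; Sketch: look the key up in ~/.authinfo instead of hard-coding it.
;; Requires an authinfo entry whose machine matches the :host below,
;; e.g.  machine api.cerebras.ai password <your key>
(gptel-make-openai "Cerebras"
  :host "api.cerebras.ai"
  :endpoint "/v1/chat/completions"
  :stream t
  :key #'gptel-api-key-from-auth-source  ;called at request time
  :models '("llama3.1-70b" "llama3.1-8b"))
#+end_src

This keeps the API key out of your init file entirely.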
diff --git a/gptel.el b/gptel.el
index 9f947dd282..88892ebe4d 100644
--- a/gptel.el
+++ b/gptel.el
@@ -32,8 +32,8 @@
;; gptel supports
;;
;; - The services ChatGPT, Azure, Gemini, Anthropic AI, Anyscale, Together.ai,
-;; Perplexity, Anyscale, OpenRouter, Groq, PrivateGPT, DeepSeek and Kagi
-;; (FastGPT & Summarizer)
+;; Perplexity, Anyscale, OpenRouter, Groq, PrivateGPT, DeepSeek, Cerebras and
+;; Kagi (FastGPT & Summarizer)
;; - Local models via Ollama, Llama.cpp, Llamafiles or GPT4All
;;
;; Additionally, any LLM service (local or remote) that provides an
@@ -61,8 +61,8 @@
;; - For Gemini: define a gptel-backend with `gptel-make-gemini', which see.
;; - For Anthropic (Claude): define a gptel-backend with `gptel-make-anthropic',
;; which see
-;; - For Together.ai, Anyscale, Perplexity, Groq, OpenRouter or DeepSeek: define
-;; a gptel-backend with `gptel-make-openai', which see.
+;; - For Together.ai, Anyscale, Perplexity, Groq, OpenRouter, DeepSeek or
+;; Cerebras: define a gptel-backend with `gptel-make-openai', which see.
;; - For PrivateGPT: define a backend with `gptel-make-privategpt', which see.
;; - For Kagi: define a gptel-backend with `gptel-make-kagi', which see.
;;
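A backend registered as in the commentary above can also be driven non-interactively from Lisp. A minimal sketch, assuming gptel's =gptel-get-backend= and =gptel-request= functions (both provided by gptel, neither shown in this diff) and the backend/model names from this commit:

#+begin_src emacs-lisp
;; Sketch: one-shot query against the registered Cerebras backend.
;; Assumes (gptel-make-openai "Cerebras" ...) has already run.
(let ((gptel-backend (gptel-get-backend "Cerebras"))
      (gptel-model "llama3.1-8b"))
  (gptel-request
   "Say hello in one word."
   :callback (lambda (response info)
               (if response
                   (message "LLM: %s" response)
                 ;; On failure, INFO carries the HTTP status string.
                 (message "gptel error: %s" (plist-get info :status))))))
#+end_src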