From: Andrew Hyatt
Subject: [elpa] externals/llm 8f30feb5c1 32/34: README improvements, including noting the nonfree llm warning
Date: Sat, 16 Sep 2023 01:32:50 -0400 (EDT)
branch: externals/llm
commit 8f30feb5c1a209f7280fd468a2fe4030434a0e81
Author: Andrew Hyatt <ahyatt@gmail.com>
Commit: Andrew Hyatt <ahyatt@gmail.com>
README improvements, including noting the nonfree llm warning
Also, remove the somewhat duplicated section about different providers.
Require the right provider in the example setup.
---
README.org | 28 ++++++++++++++++++++--------
1 file changed, 20 insertions(+), 8 deletions(-)
diff --git a/README.org b/README.org
index b9047e8103..a4f1b1a6da 100644
--- a/README.org
+++ b/README.org
@@ -13,7 +13,9 @@ Users who use an application that uses this package should not need to install it
 #+begin_src emacs-lisp
 (use-package llm-refactoring
-  :init (setq llm-refactoring-provider (make-llm-openai :key my-openai-key))
+  :init
+  (require 'llm-openai)
+  (setq llm-refactoring-provider (make-llm-openai :key my-openai-key)))
 #+end_src
 Here ~my-openai-key~ would be a variable you set up before with your OpenAI
 key. Or, just substitute the key itself as a string. It's important that you
 remember never to check your key into a public repository such as GitHub,
 because your key must be kept private. Anyone with your key can use the API,
 and you will be charged.
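For instance, a minimal sketch of setting up ~my-openai-key~ without putting the key itself in your configuration, assuming a matching line in =~/.authinfo= (the host and user values here are illustrative conventions, not anything =llm= requires):
#+begin_src emacs-lisp
(require 'auth-source)
;; Reads the password field from an ~/.authinfo entry such as:
;;   machine api.openai.com login apikey password sk-...
(setq my-openai-key
      (auth-source-pick-first-password :host "api.openai.com"
                                       :user "apikey"))
#+end_src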
@@ -31,8 +33,24 @@ You can set up with ~make-llm-vertex~, with the following parameters:
 In addition to the provider, which you may want multiple of (for example, to
 charge against different projects), there are customizable variables:
 - ~llm-vertex-gcloud-binary~: The binary to use for generating the API key.
 - ~llm-vertex-gcloud-region~: The gcloud region to use. It's good to set this
   to a region near where you are for best latency. Defaults to "us-central1".
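As an illustrative sketch of a Vertex setup (the project ID is a placeholder, and the ~:project~ slot is an assumption here; check ~make-llm-vertex~ for the exact slots it accepts):
#+begin_src emacs-lisp
(require 'llm-vertex)
;; Pick a region close to you for lower latency.
(setq llm-vertex-gcloud-region "us-east1")
;; A provider charging against a particular GCP project (hypothetical ID).
(setq my-vertex-provider (make-llm-vertex :project "my-gcp-project"))
#+end_src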
+** Fake
+This is a client that makes no calls; it is just there for testing and
+debugging. Mostly this is of use to programmatic clients of the llm package,
+but end users can also use it to understand what will be sent to the LLMs. It
+has the following parameters:
+- ~:output-to-buffer~: if non-nil, the buffer or buffer name to append the
+  request sent to the LLM to.
+- ~:chat-action-func~: a function that will be called to provide a string, or
+  a cons of symbol and message which is used to raise an error.
+- ~:embedding-action-func~: a function that will be called to provide a
+  vector, or a cons of symbol and message which is used to raise an error.
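A hedged sketch of how a developer might exercise these parameters (assuming, as the descriptions above suggest, that the action functions take no arguments):
#+begin_src emacs-lisp
(require 'llm-fake)
;; Log every request to a buffer, and script a canned chat reply.
(setq my-test-provider
      (make-llm-fake
       :output-to-buffer "*llm test log*"
       :chat-action-func (lambda () "a canned response")
       ;; Returning a (symbol . message) cons would raise an error instead:
       ;;   (lambda () (cons 'error "scripted failure"))
       :embedding-action-func (lambda () [0.1 0.2 0.3])))
#+end_src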
+* =llm= and the use of non-free LLMs
+The =llm= package is part of GNU Emacs by being part of GNU ELPA.
+Unfortunately, the most popular LLMs in use are non-free, which is not what
+GNU software should be promoting by inclusion. On the other hand, by use of
+the =llm= package, the user can make sure that any client that codes against
+it will work with free models that come along. It's likely that sophisticated
+free LLMs will emerge, although it's unclear right now what free software
+means with respect to LLMs. Because of [...]
+
+To build upon the example from before:
+#+begin_src emacs-lisp
+(use-package llm-refactoring
+  :init
+  (require 'llm-openai)
+  (setq llm-refactoring-provider (make-llm-openai :key my-openai-key)
+        llm-warn-on-nonfree nil))
+#+end_src
 * Programmatic use
-Client applications should require the module, =llm=, and code against it.
-Most functions are generic, and take a struct representing a provider as the
-first argument. The client code, or the user themselves can then require the
-specific module, such as =llm-openai=, and create a provider with a function
-such as ~(make-llm-openai :key user-api-key)~. The client application will
-use this provider to call all the generic functions.
+Client applications should require the =llm= package, and code against it.
+Most functions are generic, and take a struct representing a provider as the
+first argument. The client code, or the user themselves can then require the
+specific module, such as =llm-openai=, and create a provider with a function
+such as ~(make-llm-openai :key user-api-key)~. The client application will
+use this provider to call all the generic functions.
 A list of all the functions:
@@ -40,11 +58,5 @@ A list of all the functions:
 - ~llm-chat-async provider prompt response-callback error-callback~: Same as
   ~llm-chat~, but executes in the background. Takes a ~response-callback~
   which will be called with the text response. The ~error-callback~ will be
   called in case of error, with the error symbol and an error message.
 - ~llm-embedding provider string~: With the user-chosen ~provider~, send a
   string and get an embedding, which is a large vector of floating point
   values. The embedding represents the semantic meaning of the string, and
   the vector can be compared against other vectors, where smaller distances
   between the vectors represent greater semantic similarity.
 - ~llm-embedding-async provider string vector-callback error-callback~: Same
   as ~llm-embedding~ but this is processed asynchronously. ~vector-callback~
   is called with the vector embedding, and, in case of error,
   ~error-callback~ is called with the same arguments as in ~llm-chat-async~.
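To make the flow concrete, here is an illustrative sketch of calling these generics. It assumes ~llm-make-simple-chat-prompt~ for building a prompt (check =llm.el= for the current prompt constructors), and ~my-stored-vector~ is a hypothetical previously computed embedding:
#+begin_src emacs-lisp
(require 'llm)
(require 'llm-openai)

(defun my-cosine-similarity (a b)
  "Cosine similarity of equal-length vectors A and B.
Higher values mean greater semantic similarity."
  (let ((dot 0.0) (na 0.0) (nb 0.0))
    (dotimes (i (length a))
      (setq dot (+ dot (* (aref a i) (aref b i)))
            na (+ na (* (aref a i) (aref a i)))
            nb (+ nb (* (aref b i) (aref b i)))))
    (/ dot (* (sqrt na) (sqrt nb)))))

(let ((provider (make-llm-openai :key my-openai-key)))
  ;; Asynchronous chat: the callbacks fire when a response or error arrives.
  (llm-chat-async provider
                  (llm-make-simple-chat-prompt "What is an LLM?")
                  (lambda (response) (message "Response: %s" response))
                  (lambda (err msg) (message "Error %s: %s" err msg)))
  ;; Asynchronous embedding: compare against a stored vector (hypothetical).
  (llm-embedding-async provider "Hello, world"
                       (lambda (vec)
                         (message "Similarity: %f"
                                  (my-cosine-similarity vec my-stored-vector)))
                       (lambda (err msg) (message "Error %s: %s" err msg))))
#+end_src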
-
-All of the providers currently implemented:
-
-- =llm-openai=. This is the interface to OpenAI's ChatGPT. The user must set
-  their key, and select their preferred chat and embedding model.
-- =llm-vertex=. This is the interface to Google Cloud's Vertex API. The user
-  needs to set their project number. In addition, to get authenticated, the
-  user must have logged in initially, and have a valid path in
-  ~llm-vertex-gcloud-binary~. Users can also configure
-  ~llm-vertex-gcloud-region~ for using a region closer to their location. It
-  defaults to ="us-central1"=. The provider can also contain the user's
-  chosen embedding and chat model.
-- =llm-fake=. This is a provider that is useful for developers using this
-  library, to be able to understand what is being sent to the =llm= library
-  without actually sending anything over the wire.
 * Contributions
 If you are interested in creating a provider, please send a pull request, or
 open a bug. This library is part of GNU ELPA, so any major provider that we
 include in this module needs to be written by someone with FSF papers.
 However, you can always write a module and put it on a different package
 archive, such as MELPA.