Re: [emacs-tangents] Enhancing ELisp for AI Work
From: Jean Louis
Subject: Re: [emacs-tangents] Enhancing ELisp for AI Work
Date: Sat, 4 Jan 2025 21:09:55 +0300
User-agent: Mutt/2.2.12 (2023-09-09)
* Andreas Röhler <andreas.roehler@easy-emacs.de> [2025-01-04 13:44]:
> Hi Jean,
>
> tried your code delivered at
> https://lists.gnu.org/archive/html/help-gnu-emacs/2024-12/msg00363.html
>
> which works nicely, thanks!
>
> Notably it's much smaller than the stuff seen so far.
>
> Is there a repo for it?
I don't use git.
> Maybe some tweaks be of interest for other too.
I am glad that it works for you. I am attaching the full library which
I am actively using. Feel free, of course, to modify it as you
wish. The functions beyond the database ones were just from my
learning stage. I am using database-based model and API key settings:
20 Qwen/Qwen2.5-Coder-32B-Instruct, HuggingFace,
https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct/v1/chat/completions
21 rocket-3b.Q4_K_M.llamafile, llama.cpp,
http://192.168.188.140:8080/v1/chat/completions
22 Mistral-Nemo-Base-2407, llama.cpp,
https://api-inference.huggingface.co/models/mistralai/Mistral-Nemo-Base-2407
23 mistralai/Mistral-Nemo-Instruct-2407, HuggingFace,
https://api-inference.huggingface.co/models/mistralai/Mistral-Nemo-Instruct-2407/v1/chat/completions
24 Phi-3.5-mini-instruct-Q3_K_M.gguf, llama.cpp,
http://192.168.188.140:8080/v1/chat/completions
25 mistral-7b-v0.1.Q5_K_M.gguf, llama.cpp,
http://127.0.0.1:8080/v1/chat/completions
26 Phi-3.5-mini-instruct-Q3_K_M.gguf, llama.cpp,
http://127.0.0.1:8080/v1/chat/completions
27 bling-phi-3.5.gguf, llama.cpp,
http://192.168.188.140:8080/v1/chat/completions
28 granite-3.1-2b-instruct-Q5_K.gguf, llama.cpp,
http://192.168.188.140:8080/v1/chat/completions
29 Qwen2.5-7B-Instruct_Q3_K_M.gguf, llama.cpp,
http://192.168.188.140:8080/v1/chat/completions
30 Qwen2.5-1.5B-Instruct, llama.cpp,
http://192.168.188.140:8080/v1/chat/completions
So basically I am editing settings in the database for each model. I
cannot imagine using Emacs variables for a huge number of models; my
entry looks like the following, and it works well.
ID 30
UUID "09834f52-e601-40e2-8e4e-e6814de72f81"
Date created "2025-01-02 23:07:25.345686+03"
Date modified "2025-01-02 23:13:35.102727+03"
User created "maddox"
User modified "maddox"
Model "Qwen2.5-1.5B-Instruct"
Description nil
Hyperdocument nil
LLM Endpoint
"http://192.168.188.140:8080/v1/chat/completions"
User "Jean Louis"
Rank 0
Model's nick "LLM: "
Temperature 0.6
Max tokens 2048
Top-p 0.85
Top-k 30.0
Min-p 0.1
System message "You are helpful assistant."
I am using Emacs functions which in the end serve as "AI agents": a
function can iterate over entries in the database and provide
descriptions. Here is a practical example:
(defun rcd-db-describe-countries ()
  "Describe one yet-undescribed entry in the table `countries'."
  (interactive)
  ;; Find the first country that has no description yet.
  (let ((id (rcd-sql-first "SELECT countries_id
                            FROM countries
                            WHERE countries_description IS NULL
                            ORDER BY countries_id"
                           rcd-db)))
    ;; Guard against an empty result once all countries are described.
    (when id
      (let* ((country (rcd-db-get-entry "countries" "countries_name" id rcd-db))
             (prompt (format "Describe the country: %s" country))
             (description (rcd-llm prompt)))
        (when description
          (rcd-db-update-entry "countries" "countries_description" id
                               description rcd-db)
          (rcd-message "%s" description))))))
Then:
(run-with-timer 10 20 'rcd-db-describe-countries)
or you can run with idle timer!
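The idle-timer variant could look like this (the 30-second idle delay
is just an assumption; pick whatever suits your machine):

```elisp
;; Run the describer whenever Emacs has been idle for 30 seconds,
;; repeating on each subsequent idle period.
(run-with-idle-timer 30 t #'rcd-db-describe-countries)
```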
and I get entries like:
Austria is a country located in Central Europe. It has a population of about 9
million people and covers an area of about 83, 879 square kilometers. The
capital city is Vienna, which is also its largest city and cultural and
economic center. Other major cities include Graz, Linz, and Innsbruck.
Austria is known for its rich history and culture, which is reflected in its
architecture, museums, and festivals. It is also famous for its food,
especially its cheese and meat dishes.
Austria is a member of the European Union and is part of the Schengen Area,
which means that its citizens do not have to hold a passport to travel to other
European countries. It is also a member of NATO and is a landlocked country.
Later I can use those entries in a dashboard: when viewing a
customer's profile, I can click on the country to instantly see more
information about it.
It runs in the background all the time on a low-end Nvidia GTX 1050 Ti
with 4 GB of VRAM, but I would like to get an RTX 3090 with 24 GB of
VRAM soon, somewhere, somehow. And I have 16 GB of system RAM.
I am using fully free software models like Qwen2.5-1.5B, and the
models listed above all work very well.
If you are running locally, models like Phi-3.5-mini, under the MIT
license from Microsoft (wow!), have the best quality that I know of,
and the fastest is Qwen2.5-1.5B, which I use to generate meaningful
keywords for 1500+ website pages.
Keywords are generated as Emacs Lisp list:
("screens" "being connected together" "feeding rate" "approximately 5-6 tonnes
per hour" "welding" "screws" "gold particles" "sluice" "effectively separate
gold particles" "sluice" "retract other materials" "screens" "reusable"
"screens" "cost efficiency" "utilize screws instead of welding")
They may be repetitive, but what matters is that the output is pretty
nicely formatted. The prompt is complicated, but it works pretty well
most of the time. Those that sometimes come out wrong can easily and
automatically be corrected.
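Since the model replies with a printed Lisp list, it can be parsed and
deduplicated directly; a small sketch, where `my-parse-keyword-list`
is a hypothetical helper name:

```elisp
(require 'cl-lib)

(defun my-parse-keyword-list (reply)
  "Parse REPLY, expected to be a printed Lisp list of strings.
Return the deduplicated list, or nil if REPLY is not readable as
a list of strings."
  (let ((parsed (ignore-errors (car (read-from-string reply)))))
    (when (and (listp parsed)
               (cl-every #'stringp parsed))
      (delete-dups parsed))))

;; (my-parse-keyword-list "(\"screens\" \"welding\" \"screens\")")
;; → ("screens" "welding")
```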
Why do that? Well, when I know which important keywords relate to some
website page, I can later use PostgreSQL trigram functions to find
similar keywords on other pages and relate those pages for linking.
Once related, a page will carry the keywords inside its text along
with links to the pages related to those keywords.
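A minimal sketch of such a similarity lookup, assuming the pg_trgm
extension is installed; the table and column names are hypothetical,
and `rcd-sql` is assumed to work like the `rcd-sql-first` helper used
above:

```elisp
;; Hypothetical: find pages whose stored keywords are trigram-similar
;; to KEYWORD.  Requires: CREATE EXTENSION pg_trgm; in PostgreSQL.
;; (A real version would pass KEYWORD as a query parameter rather
;; than interpolating it, to avoid SQL injection.)
(defun rcd-db-pages-with-similar-keyword (keyword)
  "Return IDs of pages whose keywords are trigram-similar to KEYWORD."
  (rcd-sql (format "SELECT pages_id
                    FROM pages
                    WHERE similarity(pages_keywords, '%s') > 0.3
                    ORDER BY similarity(pages_keywords, '%s') DESC"
                   keyword keyword)
           rcd-db))
```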
When I process the website, no matter the markup, I can insert those
links before processing, without my supervision and without special
editing one by one.
For example, this text would get linked over the words "cost
efficiency" to some page www.example.com automatically, without my
attention, on the fly, before the Markdown, Asciidoctor, Org mode or
other markup is converted to HTML:
"The company struggled to achieve cost efficiency while trying to
increase production."
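A hedged sketch of that substitution for Markdown text;
`my-link-keyword` and the URL are hypothetical, and the attached
library does this for real across markups:

```elisp
;; Hypothetical: replace the first occurrence of KEYWORD in the
;; current buffer with a Markdown link to URL, before the
;; Markdown -> HTML conversion runs.
(defun my-link-keyword (keyword url)
  "Link the first occurrence of KEYWORD in the buffer to URL."
  (save-excursion
    (goto-char (point-min))
    (when (search-forward keyword nil t)
      (replace-match (format "[%s](%s)" keyword url) t t))))

;; Usage: (my-link-keyword "cost efficiency" "https://www.example.com")
```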
Linked pages contribute to the overall understanding of the products
and services on a website by providing additional information and
context for the main content. They help guide clients to the products
or services.
IMHO it is better for programmers to use their own functions to
request LLM responses, as that way you get more freedom, rather than
trying to accommodate yourself to existing, rather large libraries
like gptel or chatgpt-something.
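For illustration, a minimal self-contained request function against an
OpenAI-compatible llama.cpp endpoint might look like the sketch below;
the endpoint URL and parameter values are assumptions, and
`my-llm-chat` is a hypothetical name (the attached library does this
properly, with per-model settings from the database):

```elisp
(require 'url)
(require 'json)

(defun my-llm-chat (prompt)
  "Send PROMPT to a local llama.cpp chat endpoint, return reply text.
The URL, temperature and max_tokens values here are assumptions."
  (let* ((url-request-method "POST")
         (url-request-extra-headers
          '(("Content-Type" . "application/json")))
         (url-request-data
          (json-encode
           `((messages . [((role . "user") (content . ,prompt))])
             (temperature . 0.6)
             (max_tokens . 2048))))
         (buffer (url-retrieve-synchronously
                  "http://127.0.0.1:8080/v1/chat/completions")))
    (when buffer
      (with-current-buffer buffer
        ;; Skip HTTP headers, then parse the JSON body.
        (goto-char url-http-end-of-headers)
        (let ((json (json-read)))
          (alist-get 'content
                     (alist-get 'message
                                (aref (alist-get 'choices json) 0))))))))
```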
Local models such as Phi-3.5-mini and Qwen2.5-1.5B, among others, are
notably efficient and encompass a vast amount of data. They are
beneficial for education and for understanding information. However,
these models are not built for accuracy, and users must recognize
that they merely store information rather than perform actual thought
or intelligence. The term "artificial intelligence" is somewhat
misleading, as it implies some kind of thinking, but it is appropriate
as long as one understands "artificial" in the sense of
non-intelligent computation. These models generate text through
statistical operations on tensors without any conscious
decision-making, which differs from true thinking and intelligence.
True thinking relies on an innate "survival" principle that computers
lack.
The information produced by an LLM that seems nonsensical to humans
was generated with the same statistical weight as the information
that seems reasonable. This is deceptive: humans are misled by the
output of an LLM, even though it merely replicates human behavior.
When a foreigner learns a few basic Chinese phrases like "hello",
"how are you", "thank you" and "good bye", locals might mistakenly
believe they know Chinese. In reality, this does not imply the
speaker understands the language; the receiver of the communication
simply interprets the speaker's few words as command of the language.
It is the same with an LLM. It is mimicking, and the human thinks
"wow, it can interact with me, it thinks". It is an illusion.
ChatGPT is bullshit | Ethics and Information Technology
https://link.springer.com/article/10.1007/s10676-024-09775-5
In my opinion, we in the GNU project should open up, adopt some of the
fully free LLM models, and build on them.
--
Jean Louis
rcd-llm-without-api-keys.el
Description: Text document