How to use a local AI model to answer SMS messages
Downloading a GGUF model from Huggingface (video instructions)
The video below walks you through every step, from navigating to the website all the way to moving the LLM file into the right folder. It is worth watching: it takes only 75 seconds, yet captures every important step in detail.
Downloading a GGUF model from Huggingface (step-by-step guide)
In your browser, enter huggingface.co in the address bar. You should land on a page similar to the one shown in Figure 1.
Select your preferred LLM, as shown in Figure 3. For this tutorial, we will use the following model:
Meta-Llama-3.1-8B-Instruct-hf-Q4_K_M-GGUF
On the model's page, open the Files and versions tab and look for the file with the .gguf extension, highlighted in red in Figure 4.
Download the .gguf file and move or copy it into the following folder (Figure 5). If you prefer the command line, a scripted alternative is sketched below the folder path:
C:\AIModels
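If you would rather script the download than click through the browser, the sketch below uses the huggingface_hub Python package (pip install huggingface_hub) to fetch the .gguf file straight into C:\AIModels. The repo id and file name shown are assumptions for illustration only; copy the exact values from the Files and versions tab of the model you selected.

# Sketch: download a GGUF model file from Hugging Face into C:\AIModels.
# The repo id and file name below are placeholders - replace them with the
# exact values shown on the "Files and versions" tab of your chosen model.
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

MODEL_REPO = "hugging-quants/Meta-Llama-3.1-8B-Instruct-Q4_K_M-GGUF"  # placeholder repo id
GGUF_FILE = "meta-llama-3.1-8b-instruct-q4_k_m.gguf"                  # placeholder file name
TARGET_DIR = r"C:\AIModels"                                           # folder used in this guide

local_path = hf_hub_download(
    repo_id=MODEL_REPO,
    filename=GGUF_FILE,
    local_dir=TARGET_DIR,  # places the .gguf file directly in C:\AIModels
)
print("Model saved to:", local_path)

Note that some official Llama repositories are gated and require you to log in with huggingface-cli login and accept the license first; community GGUF conversions are usually downloadable without this step.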
How to configure a local AI chatbot in Ozeki SMS Gateway
How to configure an SMPP client connection for AI SMS
Send a test SMS message that is answered by the AI chatbot
More information
- How to use ChatGPT to answer SMS messages
- How to use a local AI model to answer SMS messages
- How to use ChatGPT to answer WhatsApp messages
- How to use a local AI model to answer WhatsApp messages