Ollama installation

Installing Ollama on Rocky 9.x
To install Ollama on a local system use the following steps:
- Install Ollama directly from the official site using:
- curl -fsSL https://ollama.com/install.sh | sh
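- Once the script finishes, the installed version can be confirmed with:
- ollama --version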
- Check whether the ollama service is running via:
- systemctl status ollama
- After this, run Ollama on the local system via the below command and test:
- ollama run deepseek-r1:1.5b
- The above command may take considerable time when run for the first time, as it downloads the entire model. The deepseek-r1:1.5b model in the above example is 1.1GB in size, so the command first downloads the 1.1GB model and only after that presents the ">>>" prompt where we can type our queries.
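- To fetch a model in advance without opening an interactive session, and to see which models are already present locally, the following commands can be used:
- ollama pull deepseek-r1:1.5b
- ollama list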
- The ollama service runs as the ollama user with home folder /usr/share/ollama. Consider moving /usr/share/ollama to a partition with more space and creating a symbolic link at the original location, for example as shown below.
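- A minimal sketch, assuming /data is a partition with enough free space (the /data path is only an example; adjust it to the local layout):
- systemctl stop ollama
- mv /usr/share/ollama /data/ollama
- ln -s /data/ollama /usr/share/ollama
- systemctl start ollama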
- Edit /etc/systemd/system/ollama.service and append one more Environment line in the [Service] section:
- Environment="OLLAMA_HOST=0.0.0.0"
- Restart the service so the new environment takes effect:
- systemctl daemon-reload
- systemctl restart ollama
- Without the above setting the service listens only on the loopback interface, so Ollama cannot be used over http://localhost:11434/ from n8n etc. when those tools run inside containers or on other machines; 0.0.0.0 makes it listen on all interfaces.
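- To confirm the new listen address and to test the HTTP API directly, checks such as the following can be used (the curl example assumes the deepseek-r1:1.5b model from above has already been downloaded):
- ss -tlnp | grep 11434
- curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:1.5b", "prompt": "Hello", "stream": false}'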
- To close the interactive session use:
- /bye
- Between any two unrelated queries we can clear the conversation context using:
- /clear
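- A single prompt can also be passed non-interactively on the command line, which is convenient for quick tests (the model name is just the example used above):
- ollama run deepseek-r1:1.5b "Why is the sky blue?"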