Ollama installation

From Notes_Wiki
[[Main Page|Home]] > [[Local system based AI tools]] > [[Ollama]] > [[Ollama installation]]

=Installing ollama on Rocky 9.x=
To install Ollama on the local system use the following steps:
# Install ollama directly from site using:
#:<pre>
#:: curl -fsSL https://ollama.com/install.sh | sh
#:</pre>
# Check whether ollama service is running via:
#:<pre>
#:: systemctl status ollama
#:</pre>
# After this, run ollama on the local system via the below command and test it (a non-interactive example follows this list):
#:<pre>
#:: ollama run deepseek-r1:1.5b
#:</pre>
#:: The above command may take considerable time when run for the first time as it downloads the entire model.  The deepseek-r1:1.5b model in the above example is 1.1GB in size.  So first the command will download the 1.1GB model and only after that will we get the ">>>" prompt to type our queries.
# The ollama service runs as the ollama user with home folder '<tt>/usr/share/ollama</tt>'.  Consider moving /usr/share/ollama to a partition with more space and creating a symbolic link at the original place (see the relocation example before the next section).
# To exit the interactive session use:
#:<pre>
#:: /bye
#:</pre>
# Between any two queries which are unrelated we can clear the context using:
#:<pre>
#:: /clear
#:</pre>
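The model can also be exercised without the interactive ">>>" prompt.  The below commands are a small sketch assuming the same deepseek-r1:1.5b model pulled in the steps above:
<source type="bash">
# List models that are already downloaded along with their size
ollama list

# Download a model without opening an interactive session
ollama pull deepseek-r1:1.5b

# Ask a single one-off question; ollama prints the answer and exits
ollama run deepseek-r1:1.5b "Why is the sky blue?"
</source>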


Note that models are downloaded into the <tt>.ollama</tt> folder inside the home folder of whichever user the ollama server runs as (the ollama user with the default install, or your own user with the service configuration below).  Ensure you have enough space in this partition / folder for models to get downloaded and stored.
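If the default location runs short on space, the model store can be relocated and symlinked back.  The below is only a sketch for the default install (service running as the ollama user); <tt>/data/ollama</tt> is an example target path on a bigger partition, adjust it for your system:
<source type="bash">
# See how much space the current model store uses and what is free
sudo du -sh /usr/share/ollama
df -h

# Stop the service before moving the data
sudo systemctl stop ollama

# Move the store to a bigger partition and link it back to the original place
sudo mv /usr/share/ollama /data/ollama
sudo ln -s /data/ollama /usr/share/ollama
sudo chown -R ollama:ollama /data/ollama

sudo systemctl start ollama
</source>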
=Configure ollama to automatically run on system boot=
# To configure ollama as a service running as your own user (instead of the default ollama user set up by the install script) create file '<tt>/etc/systemd/system/ollama.service</tt>' with the following contents:<source type="bash">
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/bin/ollama serve
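# Run the service as a normal local user (saurabh is just the author's username, replace with your own); models then get stored under that user's ~/.ollama folder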
User=saurabh
Group=saurabh
Restart=always
RestartSec=3
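# systemd does not expand shell variables, so $PATH below is taken literally; replace it with the actual PATH value if ollama is not on the default path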
Environment="PATH=$PATH"
Environment="OLLAMA_ORIGINS=moz-extension://*"
[Install]
WantedBy=default.target
</source>
# Reload systemd configuration and configure ollama to start on system boot (a quick check follows this list):
#:<pre>
#:: sudo systemctl daemon-reload
#:: sudo systemctl enable ollama
#:: sudo systemctl start ollama
#:: sudo systemctl status ollama
#:</pre>
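Once enabled, the service should answer on ollama's default port 11434.  The below quick check is a sketch assuming the default listen address:
<source type="bash">
# Ask the running server which models it currently has; an empty "models" list is still a valid reply
curl http://localhost:11434/api/tags

# The CLI talks to the same server
ollama list
</source>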
Refer:
* https://github.com/ollama/ollama/blob/main/docs/linux.md
* https://adasci.org/hands-on-guide-to-running-llms-locally-using-ollama/




[[Main Page|Home]] > [[Local system based AI tools]] > [[Ollama]] > [[Ollama installation]]
