CentOS 7.x Rocks cluster 7.0 Build master server

From Notes_Wiki


Minimum requirements

  • Should have at least two network interfaces
    • eth0 - First interface, connected to the private network
    • eth1 - Second interface, connected to the public network
  • Hostname must be set during installation. This FQDN should resolve to the master node's public IP via DNS.
    The hostname cannot be frontend.<domain-name>. Any name other than frontend (e.g. master, rocksmaster, etc.) seems to work fine.
  • Most of the space on the master should go to the /export partition / folder.
    If you want to reserve space for other purposes, e.g. backups, then create /mnt/data1 etc. appropriately with the rest of the space. A quick check of the interfaces and disks is sketched below.
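
These requirements can be confirmed from a shell (after installation, or from any live environment); a minimal sketch, where interface and disk names will differ per machine:

      # both interfaces (private and public) should be listed
      ip link show
      # review available disks to plan the /export partition size
      lsblk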


Installation of Rocks Frontend server

To set up a Rocks master node use:

  1. Boot with the kernel roll CD.
  2. Select "Install Rocks 7.0"
  3. Choose language "English"
  4. In the next screen select the appropriate timezone. Optionally disable kdump and SELinux.
  5. In the system area
    1. choose "NETWORK & HOSTNAME"
      1. Choose eth1 for the public IP address and enter the IP address, netmask, gateway, DNS, etc. information
        In the General tab select "Automatically connect to this network when available"
      2. In the Hostname section, enter the FQDN, e.g. rocksmaster.rnd.com. The DNS server must resolve this FQDN to the IP given to the public interface.
      3. No need to assign any private IP address on this page.
      4. Click done
    2. Choose "INSTALLATION DESTINATION"
      1. Choose to configure partitions manually ("I will configure partitioning")
      2. On the manual partitioning screen, click the link to create the partitions automatically
      3. Rename /home to /export
        Basically, create the /export partition with the maximum possible space.
      4. Click done to accept the configured partitioning
  6. In the Rocks Cluster Config section
    1. Choose "CLUSTER PRIVATE NETWORK"
      1. Enter the private domain name or leave the default .local
      2. Enter the private IP address & netmask. Ideally this range should not be duplicated anywhere in the organization's LAN network.
      3. Click done
    2. Choose "ROCKS ROLLS"
      1. Enter rolls server address such as http://rolls.rnd.com/install/rolls/
      2. Click on "List Available Rolls"
      3. Select all 17 rolls carefully. Do not miss any roll. Adding them later creates a different type of master image with a different number of packages.
      4. Click on "Add Selected Rolls"
      5. Click done.
    3. Choose "CLUSTER CONFIG"
      1. Verify FQDN
      2. Enter cluster name
      3. Verify the private IP address details
      4. Click done
  7. Click on "Begin Installation"
  8. Enter root password
  9. Once installation is done, click the "Reboot" button to reboot the node
  10. After reboot, log in with the username root and the configured password. The basic setup can then be verified as sketched below.
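
After the first login, the build can be checked with standard Rocks commands; a minimal sketch, whose output depends on the rolls and network values selected above:

      # list installed rolls; all 17 selected rolls should appear
      rocks list roll
      # show the public and private network configuration entered above
      rocks list network
      # at this point the frontend should be the only host listed
      rocks list host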



Enable http access on master from public network

By default we cannot access the master web pages hosting documentation and Ganglia monitoring from the LAN. Normally access is restricted to the master node itself via localhost. To enable access for all LAN users use:

  1. Log in to the master node as root and run the following commands:
    rocks remove firewall host=localhost rulename=A40-WWW-PUBLIC-LAN
    rocks add firewall host=localhost network=public protocol=tcp service=www chain=INPUT \
    action=ACCEPT flags="-m state --state NEW --source 0.0.0.0/0.0.0.0" \
    rulename=A40-WWW-PUBLIC-NEW
    rocks report host firewall localhost
    rocks sync host firewall localhost
  2. After this, opening http://<master-fqdn> should work from the public (LAN) network.
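
To confirm the change from another machine on the LAN, a quick check is sketched below (rocksmaster.rnd.com is the example FQDN used earlier; replace it with your master's name):

      # should return HTTP response headers instead of timing out
      curl -I http://rocksmaster.rnd.com/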



Configure disk quotas

Most of the user space on compute nodes will come from the master node's /export directory. Especially /export/home and /export/apps are made available to compute nodes for their operations. Hence, to ensure fair usage it makes sense to implement some kind of user quota on these filesystems. See Basic disk quota configuration
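
For illustration only (the linked page has the full procedure), a minimal sketch assuming /export is an ext4 filesystem listed in /etc/fstab; XFS filesystems use xfs_quota instead:

      # add usrquota to the /export options in /etc/fstab, then remount
      mount -o remount /export
      # create the quota files and enable user quotas
      quotacheck -cum /export
      quotaon /export
      # set limits for a user (opens the limits in an editor); "testuser" is just a placeholder
      edquota testuser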


Configure NTP server

Configure the NTP server on the master using Configure basic ntp server and client
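
CentOS 7 ships chrony by default, so one way to do this is sketched below; the 10.1.0.0/16 private network is an assumption, use your own private network range:

      # in /etc/chrony.conf on the master, allow clients from the private network:
      #     allow 10.1.0.0/16
      systemctl enable chronyd
      systemctl restart chronyd
      # verify that upstream time sources are reachable
      chronyc sources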


Configure history retention

It is important to store command-line history for a larger number of lines, along with timestamps, on the cluster. To configure the same on the master use Storing date / time along with commands in history
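
As an example of the end result (the linked page has the details), a minimal sketch that can go into a file such as /etc/profile.d/history.sh (the path is an assumption):

      # keep a larger history and timestamp every command
      export HISTSIZE=10000
      export HISTFILESIZE=10000
      export HISTTIMEFORMAT="%F %T  "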


Configure no_root_squash for /share/apps

It is possible that we need to write to /share/apps from other nodes, e.g. if we are installing some Nvidia CUDA libraries in /share/apps while the master node does not have any Nvidia card. In such cases it makes sense to make /share/apps writable by other nodes.

To achieve this, edit '/etc/exports' and configure the no_root_squash option for the corresponding compute node (or perhaps for all nodes). The same can be done for other NFS exports also.
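
For illustration, the relevant line in '/etc/exports' might look like the sketch below; the 10.1.0.0/255.255.0.0 private network and the existing rw,async options are assumptions, so keep whatever options are already present and only add no_root_squash:

      # /etc/exports on the master
      /export 10.1.0.0/255.255.0.0(rw,async,no_root_squash)

      # re-export so the change takes effect without unmounting clients
      exportfs -ra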

Then on a compute node we can do the following as root to validate whether the no_root_squash setting is working:

      # unmount so that autofs remounts /share/apps with the updated export options
      umount /share/apps
      cd /share/apps/<sub-folder>
      touch a
      ls -l

The file a should be created in the sub-folder as root:root and not as nfsnobody:nfsnobody.


