Hyper-V clustering on 2008 R2 Server Core

So I thought I'd put in some notes on configuring a Hyper-V cluster on 2008 R2 SP1 Core.
In my scenario I am using a team of 4 NICs for Hyper-V data, 1 NIC for management, one for cluster traffic, and one for live migration data.
I am also using 8 Gb FC storage.
The cluster consists of 5 nodes (removing the need for a quorum drive).
This is only connected to 1 VLAN; if you need multiple VLANs, create them before creating your cluster, as it will be easier.

One tool you will need in this setup is nvspbind.exe, which lets you configure the network services bound to each adapter.

First, you will want to build all your cluster servers identically, with the same hardware.
Then install the OS (I'm using 2008 R2 Datacenter) and select the Core installation.
Once installed, log in to the server and you will get a command prompt.
Install the SNMP feature:
Dism /online /enable-feature /featurename:SNMP-S


Enable Remote Administration of Firewall (optional)
netsh advfirewall set currentprofile settings remotemanagement enable


Now install .NET:
DISM /Online /Enable-Feature /FeatureName:NetFx2-ServerCore

And finally install PowerShell:
DISM /Online /Enable-Feature /FeatureName:MicrosoftWindowsPowerShell

Once that is complete, type "SCONFIG" and hit [ENTER]
Use option 2 to change the computer name > reboot >
Log in, type "SCONFIG", and join the domain (option 1) > reboot >
Enable RDP with option 7
Set up network settings with option 8
Install all of the OS patches with option 6, select (A)ll updates; this takes the longest
Do this several times until all updates are installed
I then leave my updates on manual; you don't want your cluster nodes randomly rebooting in the middle of the night, that is BAD.

OK, so you now have all your updates installed.
Next you need to identify which adapters you're going to team; the easiest way is to physically unplug them one at a time.
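If pulling cables isn't practical, the link state is also visible from the command line; a quick sketch (adapter names will vary on your hardware):

```shell
:: Show all interfaces with their connect state; unplug one cable at a
:: time and re-run to see which "Local Area Connection #" goes Disconnected.
netsh interface show interface

:: nvspbind can also enumerate the adapters and their current bindings
nvspbind.exe /n
```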
When you have identified them, run this .bat file against each adapter to strip all bound services. Save it as TEAM.BAT:



set /p NICNAME="Adapter to prepare for team: "
nvspbind.exe /d "%NICNAME%" ms_msclient
nvspbind.exe /d "%NICNAME%" ms_netbios
nvspbind.exe /d "%NICNAME%" ms_tcpip6
nvspbind.exe /d "%NICNAME%" ms_lltdio
nvspbind.exe /d "%NICNAME%" ms_rspndr
nvspbind.exe /d "%NICNAME%" ms_netbt
nvspbind.exe /d "%NICNAME%" ms_pacer
nvspbind.exe /d "%NICNAME%" ms_server
nvspbind.exe /d "%NICNAME%" ms_smb
nvspbind.exe /d "%NICNAME%" ms_tcpip



So if I wanted to do this on "Local Area Connection 1", I would run TEAM.BAT and then type in "Local Area Connection 1".
Do this for every NIC that is to be teamed. DO NOT DO THIS ON NICs that will not be teamed; those use another script.



Once that is completed on all 4 NICs on all the nodes, disable the ports on the switch and configure them for 802.3ad Dynamic with Fault Tolerance (LACP on a Cisco switch).
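On the switch side, the port-channel might look something like this (a sketch for a Cisco IOS switch; the interface range and channel-group number are assumptions, adjust to your hardware):

```
interface range GigabitEthernet0/1 - 4
 shutdown
 channel-group 1 mode active
```

"mode active" is LACP; the ports stay shut down until the server-side team is built, then you issue "no shutdown".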

Team the NICs on the server (HPTEAM.CPL for HP servers) using 802.3ad. USE THE SAME TEAM NAME FOR THE TEAM ON ALL SERVERS.

Run TEAM.BAT on the newly created teamed NIC.

Re-enable the team ports on the switch and verify everything works correctly.

Create clus.cmd:


nvspbind.exe /d "Cluster Interface" ms_msclient
nvspbind.exe /d "Cluster Interface" ms_server
nvspbind.exe /d "Cluster Interface" ms_netbios
nvspbind.exe /d "Cluster Interface" ms_tcpip6
nvspbind.exe /d "Cluster Interface" ms_lltdio
nvspbind.exe /d "Cluster Interface" ms_rspndr
nvspbind.exe /d "Cluster Interface" ms_netbt
nvspbind.exe /d "Cluster Interface" ms_pacer
nvspbind.exe /d "Cluster Interface" ms_smb


Save the file. (Note that unlike TEAM.BAT, clus.cmd leaves ms_tcpip bound, since the cluster interface still needs an IP address.)
Do this on all cluster nodes before continuing.




Configure the cluster adapter:
nvspbind.exe /n (to find the adapter)
netsh interface ipv4 set address name="Local Area Connection #" source=static address=XXX.XXX.XXX.XXX mask=255.255.255.248
netsh interface set interface name="Local Area Connection #" newname="Cluster Interface"
Run clus.cmd



Configure the CSV interface:
netsh interface set interface name="Local Area Connection #" newname="CSV"
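The CSV interface will also want a static address on its own subnet; a sketch, with a placeholder address (substitute one from your CSV network):

```shell
:: Placeholder addressing - substitute your own CSV subnet
netsh interface ipv4 set address name="CSV" source=static address=192.168.20.11 mask=255.255.255.248
```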


Configure the MGMT interface:
netsh interface set interface name="Local Area Connection #" newname="MGMT"


Type nvspbind.exe /n to verify the changes
Reboot the server
Do this on all cluster nodes before continuing
Add the Hyper-V role:

Dism /online /enable-feature /featurename:Microsoft-Hyper-V
It will ask to reboot; say yes. The server reboots at least twice.



Configure Hyper-V Virtual Networks
Create a virtual network named "Hyper-V Data"
Connection type: External, bound to the teamed "Hyper-V Data" adapter
Uncheck "Allow management operating system to share this network adapter"
Do this on all cluster nodes.
THE NAMES HAVE TO BE THE SAME, AND THIS MUST BE DONE BEFORE ADDING THE NODES TO THE CLUSTER.

Enable the Multipath I/O feature (MPIO):

Dism /online /enable-feature /featurename:MultipathIo

MPIO control panel:
C:\windows\system32\mpiocpl.exe
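If the MPIO control panel is awkward to drive on Core, mpclaim.exe can claim devices from the command line. A sketch from memory; the -a "" form claiming all MPIO-capable devices is my recollection of the syntax, so run mpclaim.exe with no arguments first to confirm the usage on your build:

```shell
:: Claim all MPIO-capable storage devices, rebooting (-r) to finish
mpclaim -r -i -a ""
```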





Install Cluster Services:
Dism /online /enable-feature /featurename:FailoverCluster-Core


Configure CSV
At this point, connect your fibre LUN to the first node in the cluster.
With DISKPART, online the disk and format it.
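A sketch of the DISKPART session, assuming the LUN shows up as Disk 1 (check the LIST DISK output first) and an arbitrary label of CSV1:

```
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> create partition primary
DISKPART> format fs=ntfs quick label=CSV1
```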






Disconnect the LUN.

Connect the LUN to cluster node 2.
With DISKPART, online the disk but do not format it; that has already been done.

Do the same as node 2 for the remaining nodes, one at a time.

Connect the LUN to all nodes.
Run the cluster creation wizard and use the pre-requisite checker.
After creation, create the CSV.
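The wizard steps can also be driven from the PowerShell you installed earlier; a sketch, where the node names (HV1-HV5), cluster name, cluster IP, and disk name are all placeholders:

```shell
:: Validate the nodes, create the cluster, then promote the shared disk to a CSV
powershell -Command "Import-Module FailoverClusters; Test-Cluster -Node HV1,HV2,HV3,HV4,HV5"
powershell -Command "Import-Module FailoverClusters; New-Cluster -Name HVCLUS -Node HV1,HV2,HV3,HV4,HV5 -StaticAddress 10.0.0.50"
powershell -Command "Import-Module FailoverClusters; Add-ClusterSharedVolume -Name 'Cluster Disk 1'"
```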

Change the firewall settings as required
Add the cluster to VMM






Comments

  1. Bless your heart for sharing this! I literally scoured the internet in vain for it!
