Bare-Metal MAYANode: Comprehensive Guide
The ‘paint by numbers’ n00b guide to creating a bare-metal MAYANode. For members of the Maya Protocol community, it covers hardware through to troubleshooting.
This instructional guide aims to provide a 'paint by numbers' approach for members of the Maya Protocol community who wish to run a Bare-Metal (BM) MAYANode (node). It covers every step from hardware selection through to troubleshooting an active node (and everything in between). The Maya Protocol Docs do a good job of explaining the basics of running a node, and this guide is largely adapted from the THORNode Bare-Metal Guide:

This comprehensive guide does not remove the node operator's onus to have baseline knowledge of Linux, the CLI, Kubernetes, DevOps and the other concepts that are required.
It must be said, running a Maya Protocol MAYANode is no trivial endeavour. The stakes are VERY high and there is little to no recourse for catastrophic errors, mistakes, hacks, or vulnerabilities.
As always, running commands that you found on the internet is not ideal… you should proceed with skepticism and caution. Fact check, confirm, verify, independently validate and ultimately accept responsibility for your own node. Neither Maya Protocol nor this guide's authors are liable for mistakes in the guide, or mistakes you make… it is your node and you are 100% responsible for it!
This guide is a living document that is periodically kept up to date by various node operator feedback. If you become a node operator, and have ideas on how the guide can be improved, pay it forward and let the community know.
Hardware
This is a list of the components that we elected to use for our BM node:
- CPU: AMD Ryzen 9 7950X (16core/32thread)
- CPU Cooler: Deepcool AG620 Multi Socket CPU Cooler
- Motherboard: Gigabyte B650 Gaming X AX AM5 ATX
- Memory: 2 x Corsair Vengeance 64GB kits (each 2x32GB, CMK64GX5M2B5600Z40, CL40) for 128GB total
- Storage: 2 x Silicon Power 4TB XS70 PCIe Gen4 M.2 NVMe SSD (8TB NVMe SSD Storage). You can future-proof it with 4 x 4TB but adding more storage later is easy (assuming your Motherboard has slots for it)
- Boot/OS Drive: 1 x Silicon Power 500GB SATA/M.2 SSD
- Power Supply: Corsair RM850e 850W 80+ Gold Fully Modular Power Supply
- Case: SilverStone Fara R1 V2 Tempered Glass ATX Case — Black
- Fans: 2 x 140mm fans. Admittedly, the setup is somewhat loud, so a quieter case could be explored. It is fine for an office (not a living space) but noticeable for sure.
- Internet: 1Gbps connection (1000Mbps down / 50Mbps up) with a prosumer ASUS router. More n00b advice: connect it via a LAN cable or you will rob yourself of speed over a Wi-Fi connection.
- UPS: Unless your router/modem are on the same UPS, it is not so relevant. We went with a small 1100VA/660W UPS to protect the hardware against surges and short power outages (<20 mins); more for equipment protection than up-time. Our power grid is ultra-stable, we have solar, and we can stomach short periods of downtime. A UPS will only keep your node alive if your internet equipment is connected to it too.

The main focus is to get an AMD CPU with enough core/threads to cover the resources you require (see below) and to get the best storage that you can find (definitely M.2 NVMe SSD or even U.3 Enterprise Drives).
You can source the parts to build it yourself but your local supplier or computer builder will likely be able to do a better job than you (with none of the frustrations or headaches) for ~$150USD. We just paid the money to have the hardware built for us; highly recommended.
Required CPU Resources
“Can you run 2 MAYANodes on this setup?”
The answer is no. The limiting factor is the CPU resources required by the MAYANode and its services. Here is the summary (measured in CPU threads; correct as of 23 August 2024), which can be found in the values.yaml for each service:
- Ethereum Daemon — 2*
- Bitcoin Daemon — 1*
- THORNode Daemon — 4
- Bifrost Daemon — 7
- Dash Daemon — 1*
- MAYANode Daemon — 4
- Kuji Daemon — 4*
- Arbitrum Daemon — 1.5*
- Radix Daemon — 4*
- Gateway — 0.2
TOTAL: ~30 CPU threads (the figures above sum to 28.7; budget 30 whole threads)
* Can be shared across multiple BM nodes.
D5 Sammy successfully runs 7 BM nodes (THORNodes and MAYANodes) on his setup, but he does have 128 CPU threads. He was clever enough to share the services that are capable of being shared (see the asterisked daemons in the list above). At minimum, each full BM node needs 30 threads, and each additional node needs ~16 threads (the un-asterisked daemons above, which sum to 15.2).
With the knowledge of how to calculate the minimum CPU requirements (and avoiding the temptation to simply lower the resource requirements), you will be able to work out the minimum resources that you need to run multiple nodes.
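The arithmetic above can be sketched in the shell. This is a hypothetical helper (the function name and the x10 integer scaling are ours, not from the guide); the per-daemon thread counts are copied from the list above:

```shell
# Thread budget per the list above, scaled x10 to avoid floats in shell arithmetic.
# Shared (asterisked) daemons: ETH 2 + BTC 1 + Dash 1 + Kuji 4 + ARB 1.5 + Radix 4 = 13.5
shared=135
# Per-node (un-asterisked) daemons: THORNode 4 + Bifrost 7 + MAYANode 4 + Gateway 0.2 = 15.2
per_node=152

threads_needed() {
  local nodes="$1"
  # ceil((shared + nodes * per_node) / 10) to get whole CPU threads
  echo $(( (shared + nodes * per_node + 9) / 10 ))
}

threads_needed 1   # one full node
threads_needed 2   # two nodes sharing the asterisked daemons
```

Note that the raw ceiling for one node is 29; the guide's headline figure of 30 simply rounds the 28.7 sum up with a little headroom. Re-check the values.yaml figures before sizing real hardware.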
General Advice before software setup
Here is an unordered list of advice before you start to set up the software side of your MAYANode:
- Read and re-read all of the guide. Seriously, just print it out, highlight all the sections that are relevant/actionable and write notes for all of the questions that you have. You will resolve each question eventually so you might as well have them written down.
- ChatGPT is amazing to help understand what specific commands are trying to achieve. This is a good starting place if you honestly do not know where to start. Once you grasp the concepts of what is being asked, you can look at other online learning resources to understand it more deeply.
- Join the Maya Protocol Discord and ask advice in the #Liquidity-nodes channel. The members in this group are exceptionally generous with their time and knowledge but they will not hold your hand through it all or spoon-feed you basic concepts you can easily learn by reading above in the chat or elsewhere online.
- SSH into your BM node instead of using an attached keyboard and mouse. It might be obvious to some, but it became blatantly obvious when setting up the WireGuard proxy (lots of copy and paste)!
- Physical security is key. Do not tell anyone IRL about your node and certainly hide behind an anonymous online account. Take all other sensible precautions as if you had a multi-million dollar pile of gold sitting in your living room.
- Make sure that you are comfortable with all aspects of running a node before you actually bond to the network. Except for the cost of the hardware (and maybe the opportunity cost of not being churned into the network), you are not losing any money by taking your time. Build the BM node up, add a small amount of $CACAO and run all commands up to sending the full bond (eg: make set-version, make set-node-keys etc). After you are comfortable with building the BM node, practice maintaining it (make update) and even practice pulling it all back down. You can do this via Leaving/Destroying or you can just use make recycle (more on this later).
Operating System Setup
Concept: You have a brand new BM rig with the base hardware specifications suggested above. Now is the stage to install the Linux Ubuntu Server Operating System (OS), which has no graphical interface (it is all CLI). You will set up and configure it following the official Ubuntu Server Guide, then use SSH to remotely access the BM rig and finish laying the foundation for your BM node.
OS Installation:
- Download the latest Ubuntu Server LTS (eg: 22.04.2 LTS) and write it to a bootable USB drive (using balenaEtcher).
- Plug-in the bootable USB and restart your BM computer.
- Select 'Install', update Installer (if you desire), select Base (Ubuntu Server), choose language, select keyboard layout (google what yours is called).
- Proxy=blank (by default; Enter through it), Mirror=default (Enter through it), Choose the correct drive (500GB OS/boot drive; not 4TB options), deselect LVM (we will do it later), 'Install Ubuntu' and accept default networking.
- Select 'Use an Entire Disk' for storage and ensure you select the 500GB drive. Accept default partitions and confirm changes (it is a new blank drive so no risk of losing important data).
- Choose preferred naming conventions (plus a long, unique and secure password). This creates a new profile for you (non-root).
- Skip the Ubuntu Pro option, install OpenSSH (so that the SSH service is enabled) and choose any extra packages to install (we did not select anything beyond the OpenSSH earlier).
- Finalise installation and reboot (it will prompt you to remove the USB drive and reboot again).
- Install and upgrade the latest versions of everything on the new OS:
sudo apt update
sudo apt upgrade
('y' to accept the extra storage required)
Note: If prompted about merging configuration files, select the first option, “install the package maintainer’s version”. Accept default restart requests.

CPU Optimisation:
As you are running a single tenant hardware node, with a known code base, you are able to optimise the CPU for better raw performance.
Reference: https://sleeplessbeastie.eu/2020/03/27/how-to-disable-mitigations-for-cpu-vulnerabilities/
lscpu
sudo nano /etc/default/grub
Add mitigations=off to GRUB_CMDLINE_LINUX_DEFAULT:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
CTRL+X (to exit), Y (to save the changes), Enter (keep the same name)
Reload the configuration:
sudo update-grub
Now reboot and confirm mitigations are off before continuing.
sudo reboot
lscpu
Now you can turn your attention to the 'swappiness'. It is likely set high (60) but can be reduced on a high-RAM server:
Edit /etc/sysctl.conf as root
sudo nano /etc/sysctl.conf
Add the following line to the bottom of the file (choose 10 or 20; user preference):
vm.swappiness = 20
CTRL+X (to exit), Y (to save the changes), Enter (keep the same name)
Reload sysctl.conf:
sudo sysctl -p
BIOS Settings
Edit the BM server BIOS settings (google how to do it on your specific motherboard) to ensure it reboots after any power loss. This ensures your machine spends minimal time offline after transient power outages (your UPS should help with this too).
Remote Access (SSH)
Setup remote access (SSH) to your new BM server; you will definitely need this during the WireGuard setup (so that you can copy and paste). SSH is installed as standard as part of the Ubuntu Server package.
To confirm it is running:
sudo systemctl status sshd
On your laptop, generate a new SSH key pair for exclusive use with your BM server; choose ED25519 as it is the newer and stronger algorithm.
[your laptop]
ssh-keygen -t ed25519
(give it a strong password for another layer of security)
ls ~/.ssh/
(you will now see id_ed25519 (private key) and id_ed25519.pub (public key))
Server Setup
Initial server setup is pretty standard across all Linux platforms; follow this DigitalOcean guide as a baseline. It is recommended that the BM server gets a reserved IP from the LAN DHCP server (google how to do this on your specific router). Also, you may need to allow port forwarding for Port 22 (required for SSH) on your router if you want to use your public IP address for SSH access (connecting from outside your network).
<your IP address> might be your ISP-provided public IP address or the reserved IP on your local network. If you are only going to connect from your local network, you can use your internal IP address (192.168.X.Y); however, if you are going to connect externally, you need to use your ISP-provided public IP address (google 'What is my IP' if you don't know it).
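To sanity-check which kind of address you are looking at, a quick pattern match on the RFC1918 private ranges can help (a hypothetical helper, not from the guide; it only classifies the address string):

```shell
# Returns success (0) if the address is in a private RFC1918 range,
# i.e. only reachable from inside your LAN.
is_private_ip() {
  case "$1" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;
    *) return 1 ;;
  esac
}

if is_private_ip 192.168.1.50; then
  echo "private: only usable from inside your LAN"
else
  echo "public: usable for external SSH access"
fi
```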
Confirm SSH access to your BM server:
[From laptop]
ssh <username>@<your IP address>
Say 'yes' to ECDSA fingerprint.
(it will ask for your username password to access)
exit
Give <username> your SSH public key:
[From laptop]
ssh-copy-id -i ~/.ssh/id_ed25519.pub <username>@<your IP address>
(will need the password again to copy it across)
Disable root login and password logins:
sudo nano /etc/ssh/sshd_config
PasswordAuthentication no
(Change 'yes' to 'no'; remove the #)
PermitRootLogin no
(Will need to remove the #)
CTRL+X (to exit), Y (to save), Enter (keep the same name)
sudo systemctl restart sshd
exit
Configure your local machine (laptop) to auto-login using <shortcut alias> (eg: mayanode-demo or whatever shortcut alias you want):
[your laptop]
nano ~/.ssh/config
// Add the following lines to the config file:
host <shortcut alias>
user <username>
hostname <your IP address>
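For reference, the same entry with the key file pinned explicitly (IdentityFile is a standard ssh_config directive; optional, but useful once you have several keys):

```
host <shortcut alias>
    user <username>
    hostname <your IP address>
    IdentityFile ~/.ssh/id_ed25519
```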
[your laptop]
ssh <shortcut alias>
(You will need your 'id_ed25519' password to complete the login)
Storage Configuration (LVM)
Concept: Fundamentally what you are doing is starting at the bottom of the picture below (with your independent hard-drives; 2 x XS70 4TB M.2 NVMe SSDs in our case) and working your way up towards the top of the picture (where you will have a large Volume Group that pools all the storage together).
Read all instructions available at https://christitus.com/lvm-guide/ and watch his video (a few times if required). Other useful videos are Learn Linux TV and David Dalton.

Install lvm:
sudo apt install lvm2
sudo apt install btrfs-progs
Check your current disk space usage:
df -h
Check partitions and volumes. The physical hard drives (M.2 NVMe SSDs) will be listed something like nvme0n1, and the partition of a physical drive will be something like nvme0n1p1 (if present). Currently only the boot drive is partitioned.
sudo lsblk
Your new M.2 SSDs are likely not yet partitioned, so you will need to make a 100% partition on each before continuing (btrfs). The previously run sudo lsblk will have confirmed whether an existing partition is available, and if not it will at least show the <identifier> of the physical hard drive you are partitioning.
sudo fdisk /dev/<identifier>
m (to see the full guide)
g (new GPT partition table)
n (add new partition)
Enter, Enter, Enter (all default values)
w (write and save)
sudo lsblk
(you will now see nvme0n1p1; the new partition)
sudo mkfs.btrfs /dev/<identifier of new partition>
df -h (to check all physical hard drives have partitions)
Repeat the step above for each of your M.2 NVMe SSDs; each needs its own partition.
Looking back at the LVM graphic (above), you will see we have our actual hard drives, each with a partition, and now we are ensuring that each partition has a Physical Volume. Check for existing Physical Volumes:
sudo pvscan
Type df -h to note the file system of the second hard drive (if it has already been mounted). If your hard drives have not been mounted, use sudo lsblk. It should be something like /dev/nvme0n1p1.
Warning: Creating a Physical Volume (PV) will wipe all data on it. Make sure you select the correct partition!
// Repeat the following step for each partition
sudo pvcreate /dev/nvme0n1p1
('y' to wipe it; nothing is there since it is new)
Working up the LVM graphic, we can now add all of the Physical Volumes into a single Volume Group. Check for existing Volume Groups (VG):
sudo vgscan
As we are setting up LVM for the first time, we will need to create a new VG. nvmevg0 is the name being created, and it uses /dev/nvme0n1p1, /dev/nvme1n1p1 and /dev/nvme2n1p1 (adjust to match your own partitions).
sudo vgcreate nvmevg0 /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1
To make it easier to understand, we will check what we created and how much data it has to allocate:
sudo vgdisplay
Finally, we want to add the VG to a Logical Volume (LV). We can check for any existing LVs, but it will likely return empty (there are no LVs):
sudo lvscan
Again, since it is a new setup, there is no LV. Time to create one:
sudo lvcreate -l 100%FREE nvmevg0 -n nvmelv0
There is an option to create a striped logical volume with sudo lvcreate -l {value} -i 2 -I 128k nvmevg0 -n nvmelv0, but striping makes storage expansion (adding more M.2 storage) impossible. Only run this command if you have maxed out your storage and do not ever plan to use an expansion card.
- {value} — The amount of space that you want to use (100%FREE will use all available space)
- -i 4 — Number of stripes (the number of disks used for striping; ie: 4 x 4TB)
- -I 128k — Size of a single stripe
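If you stay with the default (non-striped) layout, later expansion is straightforward. As a sketch (the helper name and flow are ours, not from the guide; it only prints the commands so you can review them before running anything), assuming the vg/lv names created above:

```shell
# Print (not run) the sequence that would fold a newly partitioned drive
# into the existing nvmevg0/nvmelv0 pool mounted at /data.
expansion_plan() {
  local part="$1"   # e.g. /dev/nvme2n1p1, already partitioned as shown above
  echo "sudo pvcreate ${part}"
  echo "sudo vgextend nvmevg0 ${part}"
  echo "sudo lvextend -l +100%FREE /dev/nvmevg0/nvmelv0"
  echo "sudo btrfs filesystem resize max /data"
}

expansion_plan /dev/nvme2n1p1
```

The last step grows the mounted btrfs filesystem online; verify the device and mount-point names against your own setup before executing any of the printed commands.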
Final scan to see the newly created and formatted LVM setup:
sudo lvscan
Now that we have an “ACTIVE” LV (/dev/nvmevg0/nvmelv0) to use as combined storage, we will need to mount it and ensure it remains mounted automatically:
// Format the LV into btrfs
sudo mkfs.btrfs /dev/nvmevg0/nvmelv0
// Find the UUID (copy the UUID for use shortly)
sudo blkid /dev/nvmevg0/nvmelv0
// Create where you want to mount the LV (<name> is up to you eg: /data)
sudo mkdir /<name>
// Open /etc/fstab to add the LV (so it remains mounted automatically)
sudo nano /etc/fstab
// Add the following line (<value> is the UUID from the blkid command; the mount point is the directory you just created, eg: /data):
UUID=<value> /data btrfs defaults 0 0
(exit nano; CTRL+X, 'y', enter)
// Mount the LV
sudo mount -a
Now you can do a final confirmation that the storage is set up correctly:
df -h
Kubernetes Setup
Concept: Your BM server has now been configured to the point that it has a solid foundation to run applications and services. The next step is to configure the server so that it is capable of running a MAYANode. To do this, Kubernetes needs to be installed, configured and optimised for MAYANode operations. This guide will use MicroK8s as the Kubernetes distribution.
Microk8s
Install microk8s:
# Check for the latest version of microk8s (1.28 in this example)
sudo snap install microk8s --classic --channel=1.28/stable
Confirm installation:
sudo microk8s status
sudo microk8s kubectl get nodes
Enable add-ons (dns, hostpath-storage, metrics-server, metallb):
sudo microk8s enable dns hostpath-storage metrics-server metallb
dns — is required for hostnames between pods; it is important that this is installed before the first namespace/pod is created on the server.
hostpath-storage — is required to put pod storage on host NVMe.
metrics-server — is required to run kubectl top node.
metallb — is required to assign IP to specific nodes.
metallb will prompt to enter an IP range for MetalLB, enter any place-holder IP to be replaced later (1.2.3.4/32).
Kube Environment Configuration
This configuration allows the kube client and tools (such as k9s) to interact with our kube node/cluster.
mkdir ~/.kube
// Export Kube Config
sudo microk8s config > ~/.kube/config
// Edit bashrc
nano .bashrc
Add the following content to .bashrc and save the file:
# Config for K9s
export KUBECONFIG=~/.kube/config
# Use nano instead of vi as default editor
export KUBE_EDITOR="nano"
# Autocomplete kubectl
source <(kubectl completion bash)
// Reload bashrc to apply the changes to the current session
source .bashrc
Install kubectl:
sudo snap install kubectl --classic
Install k9s Console
k9s is a simple and easy-to-use console to monitor/interact with the pods of our MAYANode cluster (note that sudo snap install k9s does not work).
// Go to Home Directory
cd
// Download k9s from their GitHub (check for latest version)
wget https://github.com/derailed/k9s/releases/download/v0.27.4/k9s_Linux_amd64.tar.gz
// Extract the downloaded k9s file
tar -xvzf k9s_Linux_amd64.tar.gz
// Move k9s to Binary folder
sudo mv k9s /bin
// Clean up the unnecessary extra files
rm LICENSE README.md k9s_Linux_amd64.tar.gz
// Open k9s to explore
k9s
// To close k9s
Ctrl+C
For troubleshooting:
// Get Environment Info (Config and Log files location)
k9s info
Note: “ERR refine failed error='Invalid kubeconfig context detected'” indicates that the KUBECONFIG variable was not found.
Setup StorageClass:
The StorageClass indicates where MicroK8s will store pod storage on our BM server; this is our newly created LVM storage.
This is not required when adding a new kube node to an existing cluster.
Prepare StorageClass object for NVMe Raid:
cd ~
mkdir mk8sconfig
nano mk8sconfig/nvme-hostpath-sc.yaml
Copy the following content into nvme-hostpath-sc.yaml:
# nvme-hostpath-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nvme-hostpath
provisioner: microk8s.io/hostpath
reclaimPolicy: Delete
parameters:
  pvDir: /data
volumeBindingMode: WaitForFirstConsumer
Create the StorageClass:
kubectl apply -f mk8sconfig/nvme-hostpath-sc.yaml
// Confirm creation of new StorageClass
kubectl get storageclass
Set the default StorageClass:
// Display StorageClass
kubectl get storageclass
// Set Default
kubectl patch storageclass microk8s-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass nvme-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
// Confirm the new Default
kubectl get storageclass
Note: If using a multi-node kube cluster, the same StorageClass will be used for each node in the cluster. They will all refer to the same path (ie: /data4), which is interpreted locally (each to their own /data4 folder) on the server where a new pod is created. It is important that each server in the cluster uses the same path for its data folder.
WireGuard Proxy
Concept: Holistically, what you are trying to achieve is to set up a new Virtual Machine (VM) via a cloud provider (like DO or AWS etc) to act as the proxy for your BM node, with the two connected together via a WireGuard (VPN) tunnel. This uses the randomly assigned VM static IP to hide your actual IP, and hence your physical location, behind the proxy. Incoming traffic is routed from the VM proxy to the BM node, but the node will use your local internet connection to download/sync the chain-daemons etc; this is why it is important to have 1Gbps internet for the BM server.
D5 Sammy's guide is great if you know what you are doing but we found the linked DO WireGuard guide to be (unsurprisingly) more comprehensive. To remove as much potential for user error, we stuck with the variables provided by the guide (eg: wg0 and 10.8.0.1/24 etc). Once we confirmed the service was working, we changed the variables to something less obvious and more unique to our connection (always keep 'em guessing!).
As a very general rule, pay very close attention to the exact IP being asked of you by the guide (eg: 10.8.0.0 or 10.8.0.1 or 10.8.0.2 etc in wg0.conf). This might feel obvious, but we misplaced a single, final digit (putting .1 instead of .0) and it took a lot of focus and frustration to find and fix it. Furthermore, the subnet mask (ie: 10.8.0.1/24 vs 10.8.0.1/32) is important; ensure you allocate the correct subnet while setting up WireGuard.
Don't run sudo systemctl <command> wg-quick@wg0 (commands being enable, start, status, restart, reload etc) until the very end of the WireGuard setup. Until you have everything finished and working, just use sudo wg-quick up wg0 (to bring up the service) or sudo wg-quick down wg0 (to bring it down). This allows you to keep editing and changing the wg0.conf file (while the network connection is down) without having to reload/restart the service. Once you have a good connection (which you can see by running sudo wg and witnessing the “latest handshake” and matching “transfer” figures between the Proxy and the BM), you can bring the service down, update all of the variables slowly/deliberately, and then bring the connection up and test it again. If it is working as advertised, you can run the three sudo systemctl <command> wg-quick@wg0 commands to have it automated.
VM Proxy Setup
Once you have created and setup a basic VM (the cheapest is fine; 512MB memory, 10GB SSD storage and 1 vCPU = $4USD/month), update the OS:
sudo apt update
sudo apt upgrade
Note: If prompted about merging configuration files, select the first option, “install the package maintainer’s version”.
Install Networking and Monitoring Tools:
sudo apt install nmap net-tools iperf3 speedtest-cli
- nmap — network mapper
- net-tools — tools for controlling the network subsystem
- iperf3 — used to test throughput/latency between the Proxy and the BM
- speedtest-cli — tests the up/down speed of the Proxy and/or the BM
Install WireGuard:
sudo apt install wireguard
Restart to reload services and the new kernel if required:
sudo reboot
Generate the WireGuard private and public keypairs:
// Generate KeyPair for WireGuard Server (The VM Proxy acting as the WireGuard Server)
wg genkey | sudo tee /etc/wireguard/wg0server.key
sudo chmod go= /etc/wireguard/wg0server.key
sudo cat /etc/wireguard/wg0server.key | wg pubkey | sudo tee /etc/wireguard/wg0server.pub
// Generate KeyPair for the MAYANode (The BM server acting as the WireGuard Client)
wg genkey | sudo tee /etc/wireguard/wg0node.key
sudo chmod go= /etc/wireguard/wg0node.key
sudo cat /etc/wireguard/wg0node.key | wg pubkey | sudo tee /etc/wireguard/wg0node.pub
Create Config File for Server
sudo nano /etc/wireguard/wg0.conf
Copy the following contents into wg0.conf:
[Interface]
Address = 10.8.0.1/24
PrivateKey = <wg0server.key>
ListenPort = 51820
SaveConfig = false
PostUp = ufw route allow in on wg0 out on eth0
PostUp = iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
PreUp = iptables -A FORWARD -i eth0 -o wg0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
PreUp = iptables -A FORWARD -i wg0 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
PreUp = iptables -A FORWARD -i eth0 -o wg0 -p tcp --syn --match multiport --dports 6040,5040,27146,27147 -m conntrack --ctstate NEW -j ACCEPT
PreUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --match multiport --dports 6040,5040,27146,27147 -j DNAT --to-destination 10.8.0.2
PreUp = iptables -t nat -A POSTROUTING -o wg0 -p tcp --match multiport --dports 6040,5040,27146,27147 -d 10.8.0.2 -j SNAT --to-source 10.8.0.1
PreDown = ufw route delete allow in on wg0 out on eth0
PreDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i eth0 -o wg0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
PostDown = iptables -D FORWARD -i wg0 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
PostDown = iptables -D FORWARD -i eth0 -o wg0 -p tcp --syn --match multiport --dports 6040,5040,27146,27147 -m conntrack --ctstate NEW -j ACCEPT
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --match multiport --dports 6040,5040,27146,27147 -j DNAT --to-destination 10.8.0.2
PostDown = iptables -t nat -D POSTROUTING -o wg0 -p tcp --match multiport --dports 6040,5040,27146,27147 -d 10.8.0.2 -j SNAT --to-source 10.8.0.1
# Node
[Peer]
PublicKey = <wg0node.pub>
AllowedIPs = 10.8.0.2/32
Variables:
- Replace <wg0server.key> and <wg0node.pub> with the long alphanumeric keys generated earlier.
- The IPv4 range should be randomly selected by yourself. Replace 10.8.0.x with something of your choosing; refer to the DigitalOcean guide for the available private ranges.
- eth0 is the most common Ethernet interface, but yours may be labelled differently.
Enable IP forwarding to ensure your BM server can use the VM proxy's public IP:
sudo nano /etc/sysctl.conf
// Unhash the following (to allow IPv4 forwarding)
net.ipv4.ip_forward=1
// Apply Changes
sudo sysctl -p
Bring up WireGuard manually:
sudo wg-quick up wg0
BM Server Setup
Back on the BM Server, configure WireGuard to connect to the VM Proxy that we just created.
Install WireGuard on the BM server:
sudo apt install wireguard
Install the WireGuard dependency:
sudo apt install openresolv
Create the wg0.conf file for the BM server:
sudo nano /etc/wireguard/wg0.conf
Copy the following contents into wg0.conf:
[Interface]
Address = 10.8.0.2/24
PrivateKey = <wg0node.key>
DNS = 9.9.9.9
SaveConfig = false
[Peer]
PublicKey = <wg0server.pub>
Endpoint = <Proxy Server Public IP>:51820
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25
Variables:
- Replace <wg0node.key> and <wg0server.pub> with the long alphanumeric keys generated earlier.
- The IPv4 range must match the one originally configured on the VM proxy.
- <Proxy Server Public IP> is the public IP of the VM proxy, ie: the IP you use to access the VM (eg: ssh root@<Public IP>).
Bring up WireGuard manually:
sudo wg-quick up wg0
Check for a successful connection. Run this command on both the BM server and the VM proxy. Both should list a 'latest handshake', and the data transfer figures should match.
sudo wg show
From your BM server, conduct a ping test to your VM proxy:
ping 10.8.0.1
Now that you have established a connection, you can bring down the service and slowly change all the variables:
sudo wg-quick down wg0
After you have changed and tested all of the components that you want, you can bring WireGuard back up and set it as an automatic service. Start by running all of these commands on the VM proxy and then on the BM server:
sudo systemctl enable wg-quick@wg0
sudo systemctl start wg-quick@wg0
sudo systemctl status wg-quick@wg0
On both the VM proxy and the BM server, set up the firewall configuration (these must be done in this order):
sudo ufw reset
sudo ufw default allow incoming
sudo ufw default allow outgoing
sudo ufw allow in on eth0 to any port 22
sudo ufw allow in on eth0 to any port 5040
sudo ufw allow in on eth0 to any port 6040
sudo ufw allow in on eth0 to any port 27146
sudo ufw allow in on eth0 to any port 27147
sudo ufw allow in on eth0 to any port 51820
sudo ufw deny in on eth0
sudo ufw enable
sudo ufw status numberedNote: ufw being enabled is mandatory. PostUp = ufw route allow... will silently do nothing if ufw is disabled and this will break the forwarding rules. Your node will not pass the health checks (http://<PublicIP>:27147/health?).
Try connecting another SSH Session before closing the current one to confirm that you can still access the VM proxy and BM server after the firewall changes.
From a separate computer, test the ports:
nmap -Pn -p 22,80,5040,6040,8080,26656,26657,27146,27147,51820 <ProxyPublicIP>
After everything is configured, the following ports should be Open: 22, 5040, 6040, 27146, 27147, 51820; everything else should be Filtered. If it does not report like this (the ports may need an active service to report 'Open'), you can check the individual ports:
// On BM Server (replacing <Port> with 22, 5040, 6040, 27146, 27147 or 51820)
nc -l <Port>
// On the independent computer (<Proxy Public IP> being the forwarded proxy IP)
nc <Proxy Public IP> <Port>
Multiple Nodes
Each BM node will require its own Static Public IP and hence its own VM proxy. All of the instructions are the same but the connections are managed with wg1.conf and wg2.conf that have their own unique variables (IPv4 range, Private/Public Key pairs and wg1 labelling).
Repeat the full guide to create a WireGuard Proxy for each Validator Node.
Note: bring up each additional tunnel with sudo wg-quick up wg1.
Configure MetalLB on the BM server
Configure the WireGuard IP in metallb:
kubectl edit ipaddresspool default-addresspool --namespace=metallb-system
Add the IP to the IP list:
spec:
addresses:
- 10.8.0.2/32
Confirm the change:
kubectl describe ipaddresspool default-addresspool --namespace=metallb-system
Multiple IPs can be added to the IP list; repeat for each VPN tunnel.
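For reference, the object you are editing has roughly this shape (field names per recent MetalLB CRDs; the apiVersion may differ on your MetalLB release, so trust what kubectl edit actually shows you):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-addresspool
  namespace: metallb-system
spec:
  addresses:
  - 10.8.0.2/32   # this node's WireGuard tunnel IP
  - 10.8.0.3/32   # a second node's tunnel IP, if running multiple validators
```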
Create and configure a MAYANode
Concept: All the previous steps were required to get to this point: you can now clone the Maya Protocol GitLab repository and start to create your MAYANode. https://docs.mayaprotocol.com/dev-docs/mayanodes/overview remains the reference document for this, and all commands from here will follow it (with some small adjustments, like not using make set-ip-address).
Install the MAYANode
For this setup we will create a distinct git working directory for each validator node (n1 = Node 1). Each validator will need its own directory (n2, n3 etc).
Prepare git folder:
cd ~
git clone https://gitlab.com/mayachain/devops/node-launcher
cd node-launcher
git checkout master
git config pull.rebase true
git config rebase.autoStash true
Install tools:
make helm
make helm-plugins
make tools
Note: You will likely get a warning like this:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/<Username>/.kube/config
# Update the permissions
chmod 600 /home/<Username>/.kube/config
# Verify the permissions
ls -l /home/<Username>/.kube/config
(Google/ChatGPT the response to ensure the permissions are correct)
Verify all pods are healthy:
k9s
// If you are not running k9s, this will also work:
kubectl get pods -A
Set loadBalancerIP parameter for Gateway
These steps will force MetalLB to assign a specific IP to this validator. This is required for the node to receive traffic from the proxy public IP (from the WireGuard setup).
nano gateway/templates/service.yaml
Add the following values in the metadata section (under annotations):
metadata:
annotations:
# MetalLb - WireGuard Setup
metallb.universe.tf/loadBalancerIPs: 10.8.0.2
Set External Environment IP for bifrost
This will allow bifrost to broadcast the proxy public IP as the IP to be reached from the other Validators.
nano bifrost/templates/deployment.yaml
Hardcode the EXTERNAL_IP value in the env section with the Public Proxy IP (<Proxy Public IP>) that was enabled with the WireGuard setup (comment out the rest with #):
env:
- name: EXTERNAL_IP
value: "<Proxy Public IP>"
# valueFrom:
# configMapKeyRef:
# name: {{ include "bifrost.fullname" . }}-external-ip
# key: externalIP
Set External Environment IP for MAYANode
This will allow mayanode to broadcast the proxy public IP as the IP to be reached from other Validators.
nano mayanode/templates/deployment.yaml
Hardcode the EXTERNAL_IP value in the env section with the Public Proxy IP (<Proxy Public IP>) that was enabled with the WireGuard setup (comment out the rest with #):
env:
- name: EXTERNAL_IP
value: "<Proxy Public IP>"
# valueFrom:
# configMapKeyRef:
# name: {{ include "thornode.fullname" . }}-external-ip
# key: externalIP
Run make install to create your MAYANode
NAME=mayanode TYPE=validator NET=mainnet make install
Confirm Pods are starting correctly
k9s
// If you are not running k9s, this will also work:
kubectl get pods -A
Sync ETH Beacon Chain from Snapshot
The Ethereum blockchain is by far the slowest to spin up, taking 1–2 weeks. D5 Sammy has a spare ETH daemon running for redundancy, but we elected to conserve CPU thread resources and sync the ETH Beacon Chain from a snapshot:
When setting up fresh, the ETH beacon node will start to sync after make install. Find a trusted Ethereum Beacon Chain checkpoint sync endpoint. Ensure you select one from "Mainnet" (eg: we chose https://mainnet-checkpoint-sync.attestant.io/).
cd ~/node-launcher
sudo nano ethereum-daemon/values.yaml
Insert your chosen ETH Beacon Chain checkpoint into checkpoint_Url:
checkpoint_Url: "https://mainnet-checkpoint-sync.attestant.io/"
CTRL+X (to exit), Y (to save), Enter (keep the same name)
Push the changes to the active ETH daemon:
NAME=mayanode TYPE=daemons NET=mainnet make install
make reset --> Ethereum
Verify that the ETH Beacon chain is syncing from the snapshot:
make logs --> ETH Daemon --> Beacon Chain
Once it is fully sync'd, you want to confirm that the Slot and State Root are correct. Compare the figures against an Ethereum Explorer of your choosing.
make verify-ethereum
Sync THORNode from Snapshot
The THORChain blockchain can take a fair amount of time to download. To save time, you can sync thornode from a nine-realms snapshot.
# This command requires jq and xmllint:
sudo apt update
sudo apt install jq libxml2 libxml2-utils
NAME=mayanode TYPE=validator NET=mainnet make recover-ninerealms
// Select the Pruned snapshot
// Select the latest (highest) block available; there are multiple snapshots
Confirmation
Before you proceed, you must ensure that all of the chains are completely up-to-date and 100% sync'd.
NAME=mayanode TYPE=validator NET=mainnet make status
It is also a good habit to check the latest block for each chain against the published latest block on an independent blockchain explorer.
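As a quick sanity check, you can compare a height reported by make status against the explorer's height numerically. A minimal sketch — the check_height helper and the 5-block tolerance are our own, not part of node-launcher:

```shell
# Sketch: flag a chain that lags the explorer by more than a tolerance.
# The heights below are illustrative; in practice read them from
# `make status` and from a public explorer.
check_height() {
  local_h="$1"; remote_h="$2"; tolerance="${3:-5}"
  lag=$((remote_h - local_h))
  if [ "$lag" -le "$tolerance" ]; then
    echo "in sync (lag=$lag)"
  else
    echo "LAGGING by $lag blocks"
  fi
}

check_height 8123450 8123452   # prints: in sync (lag=2)
```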
Sync MAYANode from Snapshot
The MAYAChain blockchain can take a fair amount of time to download. To save time, you can sync mayanode from a Maya snapshot.
NAME=mayanode TYPE=validator NET=mainnet make recover-maya
// Select the Pruned snapshot
// Select the latest (highest) block available; there are multiple snapshots
Confirmation
Before you proceed, you must ensure that all of the chains are completely up-to-date and 100% sync'd.
NAME=mayanode TYPE=validator NET=mainnet make status
It is also a good habit to check the latest block for each chain against the published latest block on an independent blockchain explorer.
(OPTIONAL) Pointing Bifrost to Shared Daemons
Concept: If you are running multiple MAYANodes and using shared daemons (not what this guide sets up; that approach is closer to the THORNode Guide), then you will need to point Bifrost at the correct location.
Before install of MAYANode:
# Assuming `m1` is your node
cd ~/m1/node-launcher
# Edit Daemons Configs and point bifrost to shared chain daemons
nano mayanode-stack/chaosnet.yaml
Change the chaosnet config to install on the Chains that you need:
global:
tag: mainnet-1.110.0
hash: 15a7a6166c964dc04f7c54ee1a423f9202073f429c5f4c26331d3ee0c9269214
mayanode:
haltHeight:
statesync:
auto: false
snapshotInterval: 0
midgard:
blockstore:
enabled: false
remote: ""
size: 20Gi
bitcoin-daemon:
enabled: false
dash-daemon:
enabled: false
ethereum-daemon:
enabled: false
kuji-daemon:
enabled: false
thornode-daemon:
enabled: true
arbitrum-daemon:
enabled: false
binance-daemon:
enabled: false
litecoin-daemon:
enabled: false
bitcoin-cash-daemon:
enabled: false
dogecoin-daemon:
enabled: false
gaia-daemon:
enabled: false
avalanche-daemon:
enabled: false
radix-daemon:
enabled: false
cardano-daemon:
enabled: false
# Point bifrost at shared daemons
bifrost:
bitcoinDaemon:
mainnet: bitcoin-daemon.c1.svc.cluster.local:8332
ethereumDaemon:
mainnet: http://ethereum-daemon.c1.svc.cluster.local:8545
dashDaemon:
mainnet: dash-daemon.c1.svc.cluster.local:9998
kujiDaemon:
enabled: true
mainnet:
rpc: http://kuji-daemon.c1.svc.cluster.local:26657
grpc: kuji-daemon.c1.svc.cluster.local:9090
grpcTLS: false
arbitrumDaemon:
mainnet: http://arbitrum-daemon.c1.svc.cluster.local:8547
radixDaemon:
mainnet: http://radix-daemon.c1.svc.cluster.local:3333/core
Configure MAYANode
Concept: docs.mayaprotocol.com details the full steps and can be followed exactly as described, with the exception of the make set-ip-address command.
Confirm all chains are up-to-date:
NAME=mayanode TYPE=validator NET=mainnet make status
Note:
Use Eldorado Wallet (or a similar wallet with Custom Memo support) to send some $CACAO to your MAYANode for gas fees. The initial steps require $CACAO to pay the gas fees, but the actual bond for a MAYANode comes from LP units.
Using Eldorado (or any wallet with Custom Memo option), bond in a small initial LP unit to whitelist your node. Make sure that you send it from the correct controlling wallet as this will become the admin wallet that is permanently attached to this MAYANode. Ensure you are using bond and not send.
Note: the Asgardex Wallet also supports Custom Memos.
Custom Memo (replacing <Node Address> with your MAYANode address):
BOND:THOR.RUNE:1:<Node Address>
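Memo typos are unforgiving, so it can help to assemble the memo mechanically before pasting it into the wallet. A minimal sketch — bond_memo is a hypothetical helper, not part of node-launcher, and the address check only catches gross mistakes:

```shell
# Sketch: build a BOND memo from its parts and refuse obviously wrong
# node addresses (MAYAChain mainnet addresses start with "maya1").
bond_memo() {
  pool="$1"; units="$2"; node="$3"
  case "$node" in
    maya1*) printf 'BOND:%s:%s:%s\n' "$pool" "$units" "$node" ;;
    *) echo "error: node address must start with maya1" >&2; return 1 ;;
  esac
}

bond_memo THOR.RUNE 1 maya10sy79jhw9hw9sqwdgu0k4mw4qawzl7czewzs47
# prints: BOND:THOR.RUNE:1:maya10sy79jhw9hw9sqwdgu0k4mw4qawzl7czewzs47
```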
Publicly publish your node keys:
make set-node-keys
Publicly publish your node version:
make set-version
This is the step that differs from the guide and is required when using a proxy IP (WireGuard setup). Replace <namespace> with the namespace you chose (eg: n1) and <Proxy Public IP> with the Public IP from the WireGuard Proxy:
kubectl exec -it -n <namespace> deploy/mayanode -- /kube-scripts/set-ip-address.sh "<Proxy Public IP>"
Check that you are now "ready" and will be available to churn in on the next churn (assuming enough bond to win that churn's bond war):
NAME=mayanode TYPE=validator NET=mainnet make status
Bond in your full bond amount (to that same MAYANode address) and wait to be churned in when you are competitive.
Custom Memo:
BOND:<Asset Pool>:<LP Unit Amount>:<Node Address>
BOND:BTC.BTC:500000000000000:maya10sy79jhw9hw9sqwdgu0k4mw4qawzl7czewzs47
Troubleshooting
There are far too many issues that can occur in the process of setting up your BM node. Each of the previous guides (linked at the start) provide a good list (and explanation) of some common issues and ways to fix or troubleshoot them.
Read through the previous guides (especially D5 Sammy's) for common advice. If you are still stuck, ask in the #Liquidity-nodes Discord channel and we will attempt to update this guide with the most common issues. It is good practice to search the channel for keywords on the issue before asking, as it might already be answered.
Personalisation (Optional)
Concept: All previous steps were mandatory for your BM node setup (except those specifically labelled as “Optional”) but the following steps are for personalisation. Of note, Scorch and Hildisvíni Óttar used oh-my-zsh (.zsh) but D5 Sammy used .bash; it is a personal preference (we chose .bash).
Edit .bashrc:
nano ~/.bashrc
// Add the following at the bottom (under your previous additions):
alias ms='NET=mainnet NAME=mayanode TYPE=validator make status'
alias n='cd ~/node-launcher'
//Reload .bashrc
source ~/.bashrc
Backing up MAYANode
Concept: Backing up and securing of your BM node is largely dictated by the risk tolerance of the node operator. In saying this, the following are the minimum backup and security measures that must be followed by all BM node operators.
The first step to backing up your node is to physically secure the Mnemonic and Password. We prefer to save them in an ultra-secure storage facility that is not easily accessible and is at a different location to the BM node.
// Physically save your Mnemonic and secure it like you have your wallet seed-phrase
NAME=mayanode TYPE=validator NET=mainnet make mnemonic
// Physically save your Password and secure safely (similar to the mnemonic)
NAME=mayanode TYPE=validator NET=mainnet make password
Your mayanode and bifrost backups will need to be secured digitally:
mkdir ~/BackupN1
// Generate a mayanode Backup (This is a backup of mayanode, it only needs to be done once)
NAME=mayanode TYPE=validator NET=mainnet SERVICE=mayanode make backup
// The backup function will display the path to the backup folder
cp ./backups/mayanode/2023-XX-XX/mayanode-16XXXXXXXX.tar.gz ~/BackupN1/
// Generate a bifrost Backup (This is a backup of the current bifrost, if migrating it must be since the last churn)
NAME=mayanode TYPE=validator NET=mainnet SERVICE=bifrost make backup
// The backup function will display the path to the backup folder
cp ./backups/bifrost/2023-XX-XX/bifrost-16XXXXXXXX.tar.gz ~/BackupN1/
// Save ~/BackupN1 offline; you can send it to a laptop first and then store it remotely.
[Your Laptop] scp -r <user>@<host of source node>:~/BackupN1 ~
The combination of ~/BackupN1/ and the securely stored Mnemonic+Password is all we need to restore our Validator from scratch (if necessary).
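Before trusting a copy of ~/BackupN1 on another machine, it is worth verifying it byte-for-byte. A minimal sketch using the standard sha256sum tool — the BACKUP_DIR guard is our own addition so the snippet is safe to run anywhere:

```shell
# Sketch: record checksums alongside the backups, then re-verify after
# copying them to the laptop / remote storage.
BACKUP_DIR="${BACKUP_DIR:-$HOME/BackupN1}"
if [ -d "$BACKUP_DIR" ]; then
  ( cd "$BACKUP_DIR" && sha256sum *.tar.gz > SHA256SUMS )
  # Run the same check on the destination machine after the scp:
  ( cd "$BACKUP_DIR" && sha256sum -c SHA256SUMS )
fi
```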
MAYANode Maintenance
Concept: Getting your BM node up and running is only the beginning; there is still plenty of (near daily) work required to keep it running correctly. This section details how to maintain and update your node, participate in Maya Protocol governance, and perform the other generic node operator actions that will be expected of you.
Useful MAYANode commands
// To see the full list of `make` commands
make help
// To debug or check on specific chain daemons
make logs
// If you want to keep all chain daemons and services but have a new mayanode
make recycle
// When a daemon/pod/service is struggling or stuck
make restart
(This just kills and restarts the pod; safe to use)
// When a daemon/service is corrupted or needs a fresh start
make reset
(This command is destructive and will wipe all data and start fresh; use cautiously)
// To provide governance on the Maya Protocol network
make mimir
(You will be voting on something and need the mimir Key and Value)
// To anonymously ask questions in #mainnet or #devops
make relay
// To pause the global Maya Protocol network for 1hr (suspected nefarious behaviour)
make pause
// To resume the global Maya Protocol network after a 'make pause' (all clear)
make resume
Updating Validators
If you are only running a single BM node then this step will cover everything required.
cd ~/node-launcher
git checkout master
git pull --rebase --autostash
NAME=mayanode TYPE=validator NET=mainnet make update
Rebooting the BM Server
Scaling down the Pods prior to rebooting the server can help prevent chain corruption.
// Scale down all Pods
kubectl -n mayanode scale deployments --replicas=0 --all
// Wait for all pods to terminate completely
k9s
sudo shutdown -h now
After you boot up the BM server, the Pods will need to be scaled back up.
// Scale up all Pods
kubectl -n mayanode scale deployments --replicas=1 --all
// Monitor to see they all come back online correctly
k9s
// Complete a final check of the node
n (this is if you added the alias; else, cd ~/node-launcher)
ms (this is if you added the alias; else, make status)
Monitoring
The best way to monitor your BM node is via SSH and make status but you can also keep an eye on it while out and about at https://www.mayascan.org/network or https://mayanode.network/.
Conclusion
This guide was our best attempt to provide a comprehensive 'paint-by-numbers' n00b guide for building a BM MAYANode. Yes, the required base knowledge is high. Yes, it is a lot harder than a centralised cloud provider or participating in a pooled node. But, no, it is not an insurmountable endeavour. The monthly $$$ savings alone are reason enough to go BM, but the added decentralisation for Maya Protocol and the control over your MAYANode are added benefits too!
Our BM nodes would not have been possible if it were not for the insanely generous community members like D5 Sammy, Scorch and Hildisvíni Óttar (plus all those on the #Liquidity-nodes Discord Channel). Everyone has been extremely helpful and any thanks should be directed towards them. We are just trying to do our best to pay it forward.
Any further questions or queries should be fielded in Discord and this guide will be updated. Good luck!
