
Problems and resolutions

NAbox unreachable

If NAbox is not reachable after the initial deployment, the first thing to check is the network configuration in the vApp (Configure > vApp Options > Properties).

A common mistake is to omit the netmask bits after the IP address: a.b.c.d/m (e.g. 192.168.1.10/24) is the mandatory IP address format during configuration.

Adding 7-mode systems

Check that you enabled TLS with options tls.enable on. Also check the following options: httpd.admin.access and httpd.admin.hostsequiv.enable
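On the 7-mode console, these can be set and checked like this (a sketch; running options with just an option name prints its current value):

```
filer> options tls.enable on
filer> options httpd.admin.access
filer> options httpd.admin.hostsequiv.enable
```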

Reset Network configuration

If you find yourself in a situation where NAbox has an incorrect network configuration and you cannot connect to it, you will have to log into the VMware console and manually configure the network:

  1. Log in to the console as the admin user
  2. cd into /etc/systemd/network
  3. Edit the network file with vim

Here is an example for a DHCP configuration:
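A minimal sketch of a DHCP unit (the file name and interface match pattern are illustrative; adjust them to your environment):

```ini
# /etc/systemd/network/10-dhcp.network (hypothetical file name)
[Match]
Name=ens*

[Network]
DHCP=yes
```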



Here is an example for a static IP:
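A minimal sketch of a static configuration (file name, interface pattern, and addresses are illustrative; substitute your own):

```ini
# /etc/systemd/network/10-static.network (hypothetical file name)
[Match]
Name=ens*

[Network]
Address=192.168.1.10/24
Gateway=192.168.1.1
DNS=192.168.1.1
```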


  4. After saving the file, run systemctl restart systemd-networkd

General questions

Root access

At your own risk, for troubleshooting or customization purposes, you can access the virtual appliance root shell by running sudo after logging in as admin over SSH.

Reset password

To reset the admin password, you need to use the vSphere console of the VM and interrupt the normal boot.

  1. When the boot loader menu appears, press the e key to edit the default Flatcar boot entry

  2. Add init=/bin/sh at the end of the kernel command line

  3. Press F10 to boot

  4. Once the shell prompt is available, change the admin password

sh-5.2# passwd admin
New password:
Retype new password:
passwd: password updated successfully
  5. Commit changes to disk
# sync
  6. Reboot the VM

Managing metrics

Delete data

Metrics can be deleted using API calls to Victoria Metrics. Note that unauthenticated access to Victoria Metrics is disabled by default and can be enabled in NAbox preferences.

  1. Delete metrics for a given cluster

    # Delete metrics for cluster2
    # (delete_series is VictoriaMetrics' delete API, reached through NAbox's /vm/ proxy path)
    curl -k -s 'https://nabox_ip/vm/api/v1/admin/tsdb/delete_series?match[]={cluster="cluster2"}'
    # Purge data from disk
    curl -k -s 'https://nabox_ip/vm/internal/force_merge'
  2. Delete metrics for a given volume

    # Delete metrics for volume `myvolume`
    curl -k -s 'https://nabox_ip/vm/api/v1/admin/tsdb/delete_series?match[]={volume="myvolume"}'
    # Purge data from disk
    curl -k -s 'https://nabox_ip/vm/internal/force_merge'

Don't forget to turn unauthenticated access to metrics back off if you had to turn it on.

Delete old data

Removing older data from NAbox 4 is done by deleting directories in /data/victoria-metrics-data/data/big and /data/victoria-metrics-data/data/small after a graceful shutdown of Victoria Metrics.

  1. Stop Victoria Metrics service

    dc stop victoria-metrics
  2. Remove folders from data directories

    $ sudo bash
    # cd /data/victoria-metrics-data/data/small
    # ls -al
    total 28
    drwxr-xr-x.  7 root root 4096 Jul  1 00:00 .
    drwxr-xr-x.  4 root root 4096 Apr 28 20:38 ..
    drwxr-xr-x. 30 root root 4096 May  1 00:00 2024_04
    drwxr-xr-x. 52 root root 4096 Jun  1 00:00 2024_05
    drwxr-xr-x. 36 root root 4096 Jul  1 00:00 2024_06
    drwxr-xr-x. 39 root root 4096 Jul  9 10:41 2024_07
    drwxr-xr-x.  2 root root 4096 Apr 28 20:38 snapshots
    # rm -rf 2024_04
    # cd ../big/
    # ls -al
    total 28
    drwxr-xr-x. 7 root root 4096 Jul  1 00:00 .
    drwxr-xr-x. 4 root root 4096 Apr 28 20:38 ..
    drwxr-xr-x. 5 root root 4096 Apr 30 20:47 2024_04
    drwxr-xr-x. 5 root root 4096 May 28 03:31 2024_05
    drwxr-xr-x. 4 root root 4096 Jun 23 22:30 2024_06
    drwxr-xr-x. 2 root root 4096 Jul  1 00:00 2024_07
    drwxr-xr-x. 2 root root 4096 Apr 28 20:38 snapshots
    # rm -rf 2024_04
  3. Start Victoria Metrics service

    dc start victoria-metrics
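The manual steps above can be sketched as a small helper that lists the monthly partition directories older than a cutoff (the YYYY_MM names sort chronologically, so plain string comparison is enough); the paths and cutoff in the example are illustrative:

```shell
#!/bin/bash
# List VictoriaMetrics monthly partition directories older than a cutoff.
# Usage: list_older_partitions <data-dir> <oldest-month-to-keep>
list_older_partitions() {
  local dir=$1 cutoff=$2 p name
  for p in "$dir"/[0-9][0-9][0-9][0-9]_[0-9][0-9]; do
    [ -d "$p" ] || continue
    name=${p##*/}
    # YYYY_MM names sort chronologically, so string comparison works
    [[ "$name" < "$cutoff" ]] && echo "$name"
  done
  return 0
}

# Example (run as root, after `dc stop victoria-metrics`):
#   list_older_partitions /data/victoria-metrics-data/data/small 2024_06 \
#     | xargs -r -I{} rm -rf /data/victoria-metrics-data/data/small/{}
```

Review the list before piping it to rm, and remember to remove the matching partitions from both the small and big directories.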

Change default retention

If you find that 2 years of data retention is not appropriate for your environment, you can override the default by editing /etc/nabox/.env.custom and adding or changing the following line:
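As a hypothetical example (the variable name below is illustrative; check your .env.custom for the exact key — the value format follows VictoriaMetrics' -retentionPeriod flag, e.g. 1y):

```
# Hypothetical variable name -- keep only 1 year of metrics
VM_RETENTION_PERIOD=1y
```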


After the change, you have to restart the services:

# dc up -d

Customize Harvest

NAbox lets you customize harvest templates as described in Harvest docs.

Customization must be done via the CLI, in the /etc/nabox/harvest/user folder.

/etc/nabox/harvest/user                       # User customization directory
├─ /zapi                                      # Collector specific customization directory
│  ├─ /cdot/9.8/                              # Version aware template directory
│  │  └─ custom_my_ignore_list_template.yaml  # Template file
│  └─ custom.yaml                             # Customization file
└─ /zapiperf
   ├─ /cdot/9.8/
   │  └─ custom_my_ignore_list_template.yaml
   └─ custom.yaml

Extending templates is only supported by the Zapi / ZapiPerf collectors.

The Harvest REST collector only supports replacing one of the builtin templates with one of your own.

The REST collector does not support template extension, so if you choose to prefer Rest over Zapi in Preferences, make sure you base your customization on the templates in /data/packages/harvest/conf/rest/9.x.0/ (most template files reside in the 9.12.0 directory).

Common practice is to copy the needed file(s) and make the necessary modifications; the copy will override the default Harvest file for the object definition.

Note about built-in customizations

NAbox defines its own customizations in /etc/nabox/harvest/nabox.

Symbolic links are created pointing to /etc/nabox/harvest/nabox.available/ directories.

├─ /nabox
│  ├─ /collect-workloads -> ../nabox.available/collect-workloads
│  └─ /exclude-transient-volumes -> ../nabox.available/exclude-transient-volumes
└─ /nabox.available/
   ├─ /collect-workloads
   └─ /exclude-transient-volumes

exclude-transient-volumes is always enabled, while collect-workloads is only enabled according to user preferences in the NAbox web UI.

During startup, NAbox merges all customizations into /etc/nabox/harvest/active:

├─ /zapiperf
│  ├─ /custom.yaml
│  └─ /cdot/9.8.0
│     ├─ /custom_my_ignore_list_template.yaml
│     └─ /exclude_transient_volumes.yaml
└─ /zapi
   ├─ /custom.yaml
   └─ /cdot/9.8.0

zapiperf/custom.yaml content:

    Volume: exclude_transient_volumes.yaml,custom_my_ignore_list_template.yaml
    Workload: workload.yaml,exclude_transient_volumes.yaml,custom_my_ignore_list_template.yaml
    WorkloadDetail: workload_detail.yaml,exclude_transient_volumes.yaml,custom_my_ignore_list_template.yaml
    WorkloadDetailVolume: workload_detail_volume.yaml,exclude_transient_volumes.yaml,custom_my_ignore_list_template.yaml
    WorkloadVolume: workload_volume.yaml,exclude_transient_volumes.yaml,custom_my_ignore_list_template.yaml

zapi/custom.yaml content:

    Volume: exclude_transient_volumes.yaml,custom_my_ignore_list_template.yaml

Example : exclude collection based on volume name

To ignore volumes that you don't want the Harvest performance and/or capacity pollers to collect, you can follow these steps.

  1. Create /etc/nabox/harvest/user/zapiperf/custom.yaml

      Volume: custom_my_ignore_list_template.yaml
      Workload: custom_my_ignore_list_template.yaml
      WorkloadDetail: custom_my_ignore_list_template.yaml
      WorkloadDetailVolume: custom_my_ignore_list_template.yaml
      WorkloadVolume: custom_my_ignore_list_template.yaml
  2. Create /etc/nabox/harvest/user/zapiperf/cdot/9.8.0/custom_my_ignore_list_template.yaml

      plugins:
        LabelAgent:
          exclude_regex:
            - volume `Test_volume.*`
            - volume `Temp_volume.*`
  3. We need to do the same for the Zapi poller. Create /etc/nabox/harvest/user/zapi/custom.yaml

      Volume: custom_my_ignore_list_template.yaml
  4. Create /etc/nabox/harvest/user/zapi/cdot/9.8.0/custom_my_ignore_list_template.yaml

      plugins:
        LabelAgent:
          exclude_regex:
            - volume `Test_volume.*`
            - volume `Temp_volume.*`
  5. Restart Harvest

    dc restart harvest