
NAbox system layout

Summary

This KB explains the layout of the NAbox file system and where to find the different configurations. It is aimed at advanced users who want to hack or customize NAbox to their specific needs.

Don't hesitate to share your changes through the help channels so they can be considered for future NAbox improvements.

Introduction

NAbox is based on Flatcar Linux, a stateless Linux distribution. This is an important characteristic of Flatcar: it makes upgrading the system much easier than with package-based distributions. It also means that most of the system is read-only, which improves security but can present a challenge for customization.

NAbox disks

There are two disk devices presented to the NAbox VM:

  • The system disk, whose partition layout is described below
  • The data disk, an unpartitioned, thin-provisioned 200 GB disk
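
If you want to confirm this layout on a running appliance, the standard Linux tools work as expected. A quick check, assuming the default device names (sda for the system disk, sdb for the data disk):

# List both disks and their partitions
lsblk /dev/sda /dev/sdb

# Check how much of the data disk is actually in use
df -h /data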

As of NAbox 4.1, this is the partition layout for the system disk:

Number | Label | Description | Partition Type
1 | EFI-SYSTEM | Contains the bootloader | FAT32
2 | BIOS-BOOT | Contains the second stages of GRUB for use when booting from BIOS | grub core.img
3 | USR-A | One of two active/passive partitions holding Flatcar Container Linux | EXT2
4 | USR-B | One of two active/passive partitions holding Flatcar Container Linux | (empty on first boot)
5 | ROOT-C | This partition is reserved for future use | (none)
6 | OEM | Stores configuration data specific to an OEM platform | BTRFS
7 | OEM-CONFIG | Optional storage for an OEM | (defined by OEM)
8 | (unused) | This partition is reserved for future use | (none)
9 | ROOT | Stateful partition for storing persistent data | EXT4, BTRFS, or XFS

The data disk /dev/sdb is mounted on /data and contains the following folders:

Folder | Description
grafana | Grafana container data
packages | Application packages, currently only a harvest subdirectory
uploads | Uploads, like system upgrades
victoria-metrics-data | VictoriaMetrics time-series database storage
vmagent-remotewrite-data | vmagent remote write buffer and cache (only used during migrations from version 3)
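
If the data disk fills up, a per-folder usage report usually shows which component is responsible (the time-series database under victoria-metrics-data is often the largest consumer):

# Show disk usage per top-level folder on the data disk
sudo du -sh /data/*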

Container environment

Most NAbox components are delivered as containers inside the virtual appliance.

Containers are managed by Docker Compose; you can find the files for the Docker Compose environment in /etc/nabox/.

File | Description
compose.yaml | Main configuration file; it should not be modified because it will be overwritten with each update
.env | This file should never be modified; it contains default values for internal NAbox settings
.env.custom | Environment variables in this file override default values from .env; it is managed by the NAbox web UI

You can interact with the compose stack with the dc command-line utility, which is a wrapper around docker compose:

#!/bin/bash
COMPOSE_FILES=`ls -r /etc/nabox/compose*.yaml|sed -e "s/^/-f /"|tr "\n" " "`
docker compose $COMPOSE_FILES --env-file /etc/nabox/.env --env-file /etc/nabox/.env.custom $@

The following containers are deployed with NAbox:

Container | Description
alertmanager | Alertmanager manages email notifications for the various rules declared by NetApp Harvest
cadvisor | Exports container usage statistics for the NAbox-specific dashboard
grafana | Grafana dashboards
havrest | NetApp Harvest 2.x container
node-exporter | Exports NAbox host usage statistics for the NAbox dashboard
caddy | Reverse proxy for all NAbox HTTP services
victoria-metrics | VictoriaMetrics stores the metrics coming from NetApp Harvest 2.x
vmagent | vmagent is used to transform metrics during the migration from NAbox 3
vmalert | Interface to the various alerts provided in the standard Harvest dashboards deployed in Grafana

Here are a few examples of how you can use the dc command to display container data:

admin@localhost ~ $ dc ps
NAME               IMAGE              COMMAND                  SERVICE            CREATED              STATUS                    PORTS
alertmanager       alertmanager       "/bin/alertmanager -…"   alertmanager       3 weeks ago          Up 57 seconds             9093/tcp
cadvisor           cadvisor           "/usr/bin/cadvisor -…"   cadvisor           3 weeks ago          Up 57 seconds (healthy)   8080/tcp
grafana            grafana-oss        "/run.sh"                grafana            2 weeks ago          Up 57 seconds             127.0.0.1:3000->3000/tcp
havrest            havrest:latest     "/havrest"               havrest            About a minute ago   Up 57 seconds             8080/tcp
node-exporter      node-exporter      "/bin/node_exporter …"   node-exporter      3 weeks ago          Up 57 seconds             9100/tcp
caddy              caddy              "caddy run --config …"   rproxy             3 weeks ago          Up 57 seconds             0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 443/udp, 2019/tcp
victoria-metrics   victoria-metrics   "/victoria-metrics-p…"   victoria-metrics   2 weeks ago          Up 57 seconds             127.0.0.1:8428->8428/tcp
vmagent            vmagent            "/vmagent-prod -remo…"   vmagent            4 days ago           Up 57 seconds             8429/tcp
vmalert            vmalert            "/vmalert-prod -rule…"   vmalert            3 weeks ago          Up 57 seconds             8880/tcp

$ dc logs -f --tail 10 havrest
Attaching to nabox-api
havrest  | 2024-03-30T17:52:23Z INF collector/collector.go:608 > Collected Poller=7-mode apiMs=82 bytesRx=3393 calcMs=0 collector=ZapiPerf:HostAdapter exportMs=0 instances=9 instancesExported=9 metrics=18 metricsExported=18 numCalls=1 parseMs=1 pluginMs=0 pollMs=83 skips=0 zBegin=1711821143313
havrest  | 2024-03-30T17:52:23Z INF collector/collector.go:608 > Collected Poller=7-mode apiMs=79 bytesRx=19418 calcMs=0 collector=ZapiPerf:Disk exportMs=1 instances=10 instancesExported=50 metrics=180 metricsExported=213 numCalls=1 parseMs=4 pluginMs=1 pollMs=85 skips=0 zBegin=1711821143313
havrest  | 2024-03-30T17:52:23Z INF collector/collector.go:608 > Collected Poller=7-mode apiMs=87 bytesRx=693 calcMs=0 collector=ZapiPerf:NFSv4Node exportMs=0 instances=1 instancesExported=1 metrics=4 metricsExported=4 numCalls=1 parseMs=0 pluginMs=0 pollMs=87 skips=0 zBegin=1711821143312
havrest  | 2024-03-30T17:52:23Z INF collector/collector.go:608 > Collected Poller=7-mode apiMs=86 bytesRx=1969 calcMs=0 collector=ZapiPerf:SystemNode exportMs=0 instances=1 instancesExported=1 metrics=21 metricsExported=20 numCalls=1 parseMs=0 pluginMs=0 pollMs=87 skips=0 zBegin=1711821143312
havrest  | 2024-03-30T17:52:23Z INF collector/collector.go:608 > Collected Poller=7-mode apiMs=87 bytesRx=1097 calcMs=0 collector=ZapiPerf:NFSv3Node exportMs=0 instances=1 instancesExported=1 metrics=9 metricsExported=6 numCalls=1 parseMs=0 pluginMs=0 pollMs=88 skips=0 zBegin=1711821143312
havrest  | 2024-03-30T17:52:23Z INF collector/collector.go:608 > Collected Poller=7-mode apiMs=85 bytesRx=992 calcMs=0 collector=ZapiPerf:CIFSNode exportMs=0 instances=1 instancesExported=1 metrics=15 metricsExported=14 numCalls=1 parseMs=0 pluginMs=0 pollMs=86 skips=0 zBegin=1711821143314
havrest  | 2024-03-30T17:52:23Z INF collector/collector.go:608 > Collected Poller=7-mode apiMs=86 bytesRx=5801 calcMs=0 collector=ZapiPerf:VolumeNode exportMs=0 instances=2 instancesExported=2 metrics=72 metricsExported=68 numCalls=1 parseMs=1 pluginMs=0 pollMs=88 skips=0 zBegin=1711821143312

Note that the Docker Compose stack is normally managed by the systemd service "nabox", and all logs from docker compose are sent to journald as well:

$ sudo journalctl -fu nabox
Jan 04 12:42:29 localhost dc[2055062]: havrest           | time=2026-01-04T12:42:29.554Z level=INFO source=collector.go:622 msg=Collected Poller=unix collector=Unix:poller task=instance apiMs=0 bytesRx=0 instances=0 numCalls=0 pollMs=1 zBegin=1767530549553
Jan 04 12:42:39 localhost dc[2055062]: havrest           | time=2026-01-04T12:42:39.138Z level=INFO source=collector.go:601 msg=Collected Poller=unix collector=Unix:poller apiMs=0 bytesRx=0 calcMs=0 exportMs=0 instances=0 instancesExported=4 metrics=0 metricsExported=128 numCalls=0 parseMs=0 pluginInstances=0 pluginMs=0 pollMs=2 renderedBytes=12247 zBegin=1767530559136
Jan 04 12:42:39 localhost dc[2055062]: caddy             | {"level":"warn","ts":1767530559.7664077,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","upstream":"harvest-mcp:8082","duration":0.001261025,"request":{"remote_ip":"10.65.176.50","remote_port":"43256","client_ip":"10.65.176.50","proto":"HTTP/1.1","method":"GET","host":"nabox.fr.netapp.com","uri":"/","headers":{"Sec-Fetch-Mode":["cors"],"User-Agent":["node"],"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["nabox.fr.netapp.com"],"Via":["1.1 Caddy"],"Accept-Encoding":["gzip, deflate"],"X-Forwarded-For":["10.65.176.50"],"Mcp-Protocol-Version":["2025-06-18"],"Mcp-Session-Id":["3TTZMVING6WOJJQUX3WZ6C3KQF"],"Authorization":["REDACTED"],"Accept":["text/event-stream"],"Accept-Language":["*"]}},"error":"reading: context canceled"}
Jan 04 12:42:39 localhost dc[2055062]: caddy             | {"level":"info","ts":1767530559.7683485,"logger":"http.log.access.log1","msg":"handled request","request":{"remote_ip":"10.65.176.50","remote_port":"43256","client_ip":"10.65.176.50","proto":"HTTP/1.1","method":"GET","host":"nabox.fr.netapp.com","uri":"/mcp/","headers":{"Authorization":["REDACTED"],"Accept":["text/event-stream"],"Accept-Language":["*"],"Connection":["keep-alive"],"Mcp-Session-Id":["3TTZMVING6WOJJQUX3WZ6C3KQF"],"Sec-Fetch-Mode":["cors"],"User-Agent":["node"],"Accept-Encoding":["gzip, deflate"],"Mcp-Protocol-Version":["2025-06-18"]}},"bytes_read":0,"user_id":"","duration":0,"size":0,"status":200,"resp_headers":{"Cache-Control":["no-cache, no-transform"],"Content-Type":["text/event-stream"],"Date":["Sun, 04 Jan 2026 12:37:38 GMT"],"Access-Control-Allow-Headers":["Content-Type, Mcp-Protocol-Version, Mcp-Session-Id"],"Access-Control-Allow-Methods":["GET, POST, OPTIONS"],"Access-Control-Allow-Origin":["*"],"Via":["1.1 Caddy"]}}
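
Since the stack is driven by that systemd unit, the usual systemd commands apply. For example (assuming the unit behaves like a regular service; a full restart briefly interrupts metric collection):

# Check the state of the compose stack service
systemctl status nabox

# Restart the whole stack
sudo systemctl restart nabox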

Configurations

Configurations for the different parts of NAbox are kept in two different areas:

  • /etc/nabox for dynamic configurations
  • /usr/share/nabox for static configurations

Dynamic configurations are files modified by normal operation of NAbox, like adding or removing clusters, configuring LDAP, configuring SSL, etc.

Static configurations are in the read-only area of NAbox and cannot be changed. This includes the reverse proxy configuration, the Grafana dashboards for NAbox (not those from Harvest), the VictoriaMetrics scraper definitions, etc.

These files, from either static or dynamic locations, are presented to the relevant containers through Docker volumes defined in /etc/nabox/compose.yaml.

If you are in doubt about where to find the configuration for any of the containers, that's where you should start. For example, you will find a block like this for alertmanager:

  alertmanager:
    platform: linux/amd64
    image: alertmanager
    container_name: alertmanager
    hostname: alertmanager
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "5"
    volumes:
      - ${NABOX_ETC}alertmanager/:/etc/alertmanager
    command:
      - --config.file=/etc/alertmanager/alertmanager.yml
      - --cluster.listen-address=
      - --web.external-url=${ALERTMANAGER_LINK_URL}
    labels:
      - nabox.core=true
    restart: always

This tells you that alertmanager configurations are in ${NABOX_ETC}alertmanager/. If you're wondering what $NABOX_ETC resolves to, you can check /etc/nabox/.env.

As you can see, some parameters are also passed on the command line under command:; in that case, look at the environment variables in .env or .env.custom.
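
A quick way to see what a service actually ends up with, without tracing variables by hand, is to ask Docker Compose for the fully resolved configuration (alertmanager is used here only because it matches the excerpt above; any service name works):

# Where does ${NABOX_ETC} point to?
grep NABOX_ETC /etc/nabox/.env /etc/nabox/.env.custom

# Print the alertmanager service definition with all variables resolved
dc config alertmanager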

Configuration files

NAbox stores most configuration files in the /etc/nabox/ directory:

Directory | Description
alertmanager | AlertManager configuration; contains email notification settings
harvest | Harvest 2.x configuration directory
secrets | Contains critical authentication tokens for communication with the Grafana and havrest containers, as well as SSL certificates

harvest

In this directory you will find the main configuration file for Harvest. Normally, this file is only managed by the NAbox Web UI and shouldn't be modified externally.

It is possible, though, to copy this file from one NAbox to another when migrating, or to generate it with an external script, as long as you take extra care to include all the properties required for a poller to function properly. Specifically, prometheus_port is required and must be unique across pollers in the file.
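
A quick sanity check after hand-editing or generating the file is to make sure no prometheus_port value appears twice (this assumes the standard harvest.yml file name; adjust the path if yours differs):

# Print any prometheus_port line that is duplicated
grep prometheus_port /etc/nabox/harvest/harvest.yml | sort | uniq -d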

When you modify that file, you should see the changes reported in the Web UI.

Note

In the Web UI, only systems having Zapi, ZapiPerf or StorageGrid collectors will be displayed.

The harvest directory contains a few folders:

Folder | Description
active | This directory contains the Harvest configuration templates compiled from the standard NAbox templates and user-defined templates
user | User template directory where you can add your own customizations (see the FAQ)
nabox | This directory contains default NAbox customizations and user-enabled options like workload collection

secrets

The secrets directory maintains the tokens used for communication between the different components of NAbox.

File | Description
api-tokens | Contains internal and user-generated tokens
grafana-secret | Grafana API token used when adding datasources or customizing dashboard properties from the Web UI
havrest-secret | Harvest REST frontend authentication token (documentation at https://<nabox ip>/havrest/ui/)
jwt-secret | JWT seed, unique for each NAbox deployment, used to generate Web UI session tokens
ssl | Certificate files for the NAbox HTTP server
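
For example, if you need to check when the HTTPS certificate served by NAbox expires, you can inspect the files in the ssl folder (the certificate file name below is a placeholder; use the real name you find there):

# List the certificate files used by the NAbox HTTP server
sudo ls -l /etc/nabox/secrets/ssl/

# Show the validity dates of a certificate
sudo openssl x509 -in /etc/nabox/secrets/ssl/<cert-file> -noout -dates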

Making changes

If you need to adjust different parameters, here are a few ground rules:

  • Do not modify /etc/nabox/.env, it will be overwritten on upgrade
  • Do not modify /etc/nabox/compose.yaml, it will be overwritten on upgrade

You can, however, make changes to .env.custom and use Docker Compose override files named compose-<whatever>.yaml to alter the environment and the compose stack; a sketch of the environment side is shown below. When creating compose override files, be aware of the rules in the following paragraphs.
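
A minimal sketch of an environment override, reusing ALERTMANAGER_LINK_URL from the compose excerpt above purely as an illustration (the value is an example, and since the web UI also writes to this file, make sure your change does not conflict with it):

# In /etc/nabox/.env.custom, add or update a line such as:
#   ALERTMANAGER_LINK_URL=https://nabox.example.com/alertmanager
sudo vi /etc/nabox/.env.custom

# Re-create the container(s) that read the variable
dc up -d alertmanager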

If you need to alter configurations that are in the read-only area of NAbox (/usr/share/nabox/), you will have to copy that content somewhere writable (e.g. /etc/) and alter the Docker Compose configuration so it points to that location.

Again, do not change compose.yaml directly; instead, create a compose-custom.yaml file and re-declare the portion to override (and don't forget to read the merge rules, which tell you, for example, that command: values have to be fully rewritten, while environment: values are merged).
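
Putting the last two paragraphs together, a minimal sketch of the pattern could look like this. The service name, the /usr/share/nabox sub-path and the in-container path are placeholders; take the real values from the service definition in /etc/nabox/compose.yaml:

# 1. Copy the read-only content somewhere writable
sudo mkdir -p /etc/nabox-custom
sudo cp -r /usr/share/nabox/<some-config-dir> /etc/nabox-custom/

# 2. Create an override file; the dc wrapper picks up every /etc/nabox/compose*.yaml
sudo tee /etc/nabox/compose-custom.yaml >/dev/null <<'EOF'
services:
  <service-name>:
    volumes:
      - /etc/nabox-custom/<some-config-dir>:/path/inside/container
EOF

# 3. Re-create the affected container with the merged configuration
dc up -d <service-name>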