Bind Mount in Unprivileged Container
Preamble
I wrote this article to help anyone trying to mount a drive from their Proxmox host to an unprivileged container. I found myself facing this issue when attempting to mount a media storage data pool to an LXC that was running Jellyfin. While I liked the simplicity of using an LXC with data pools on my Proxmox host, I also wanted the security advantages of running the container without unnecessary privileges. After spending a non-insignificant amount of time learning about Proxmox containers and permissions, I decided to write this article for anyone struggling to do the same.
Create a ZFS pool on the Proxmox host
- Go to Datacenter -> Your Node -> Disks -> click on ZFS
- At the top, click on "Create: ZFS"
- Give your ZFS pool an identifiable name, uncheck "Add Storage," choose your preferred RAID level, change compression to lz4, and select all of the drives you want to use in the storage pool. Then press "Create".
- NOTE: If the drives don't show up, you may need to wipe the disks and try again.
  - You can do this by going to Datacenter -> Your Node -> Disks
  - Select the disk you want to erase and press "Wipe Disk" at the top
  - This will completely erase all the data stored on the drive! Make sure the data is backed up if you don't want to lose it.
  - Repeat this process for all of the disks you want to use in the pool.
  - You can then go back to the previous step and the drives should appear.
- Now go to Datacenter -> Your Node -> click on Shell.
- `zfs list` should show your newly created ZFS pool.
- Use `zfs create <pool-name>/backups`. This will create a new dataset as a subdirectory of your pool. Make sure to replace "pool-name" with the name you chose for your own ZFS pool.
- I recommend creating at least the following datasets:
  - Backups: `zfs create <pool-name>/backups`
  - ISOs: `zfs create <pool-name>/isos`
  - Media: `zfs create <pool-name>/media`
- If you run `zfs list` now, you should see all your newly created datasets!
Use Newly Created Datasets for Storage
- To start using your ZFS pool to actually store data, you need to either add it directly to the storage on the Proxmox host (do this for system-related data like Proxmox backups, snapshots, ISOs, container templates, etc.) or mount the dataset to your desired LXC and change the directory permissions (do this for personal data like video files, documents, personal backups, pictures, etc.).
- Remember that with a single ZFS pool you can create multiple datasets to accomplish everything listed above.
  - One dataset for backups, another for media storage, another for ISOs, etc.
Adding Dataset to Proxmox Storage
- Do this with datasets that you want to use for storing Proxmox data like: LXC/VM backups, snapshots, ISOs, container templates, etc.
- You should NOT do these steps for datasets that you want to use for storing personal/non-system data (pictures, videos, audio files, laptop/desktop backups, etc.). Use the next section for those datasets.
- Go to Datacenter -> click on Storage
- Click on "Add" and select "Directory" from the dropdown
If you want to use a dataset for Backups, then do the following:
- ID: `Backups`
- Directory: `/<pool-name>/backups`
  - This example uses the dataset "backups", but if you called it something else then use that instead (you should have probably just called it backups though).
- Content: `VZDump backup file, Snippets`
- Enable: `yes`
- The rest you can leave as default unless you have special requirements.
- You should now see your newly created `Backups` storage in the storage panel.
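For reference, the GUI steps above end up writing an entry like the following to `/etc/pve/storage.cfg` on the host (the pool name `tank` is just an example; you normally don't need to edit this file by hand):

```
dir: Backups
	path /tank/backups
	content backup,snippets
```

Checking this file is a quick way to confirm the storage was added the way you intended.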
Automatic Backups:
This will create a "Backup Job" to make automatic backups on a recurring basis.
- Go to Datacenter -> Backup and click on "Add"
- Make sure "Backups" is selected for storage
- You can now choose your own options for which nodes to backup, which VMs, schedule, etc.
Manual Backups
If you want to manually make a backup of an LXC/VM:
- Datacenter -> Your Node -> The LXC/VM you want to backup -> click on "Backup"
- Select "Backup now" at the top
- Make sure "Backups" is selected for storage
- Choose the mode you want to use. (I usually use snapshot, but you can Google these options to find out more info).
- Adjust the additional settings how you wish (these options won't be covered in this guide).
- Press "Backup"
- This will create a backup of the LXC/VM stored in your Backups dataset on your ZFS pool.
If you want to use another dataset for ISOs and Container Templates, do the following:
- ID: `ISOs`
- Directory: `/<pool-name>/isos`
  - Or whatever you called the dataset
- Content: `ISO image`, `Container template`
- Enable: `yes`
- The rest you can leave as default unless you have special requirements.
- You should now see your newly created `ISOs` storage in the storage panel.
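As with the backups storage, this configuration corresponds to an entry roughly like the following in `/etc/pve/storage.cfg` (pool name `tank` assumed as an example):

```
dir: ISOs
	path /tank/isos
	content iso,vztmpl
```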
- Go to Datacenter -> Your Node -> ISOs (or whatever you called the dataset). You should see "CT Templates" and "ISO Images".
- To download container templates, press "CT Templates".
- Unless you have your own container template you want to use, choose "Templates".
- You should see a list of templates you can download to use for LXCs.
  - Tip: If you don't see the newest templates or TurnKey templates, try running `pveam update` in your Node's shell to update the default template list.
- To download ISO images, simply click on "ISO Images" and upload a local ISO or download from a URL.
Using a Dataset for Personal Data (videos, pictures, laptop backups, archives, etc.)
This section will show you how to pass your dataset to an unprivileged LXC so that users within the LXC can access and write to the dataset.
For the following steps, I will be using pool-name/media as an example of a dataset I want to use for storing videos and pictures. But these instructions can be generalized to any dataset you want to pass to LXCs.
- Start by creating an unprivileged LXC
  - These instructions won't go into how to make an LXC or what options you should select.
  - But generally there is nothing special you need to select when setting up your LXC to pass in your dataset.
In order for your LXC users to have access to the dataset, you need to change its permissions. This is because by default root users of unprivileged containers are mapped to uid 100000, to prevent the LXC from gaining root access to your host system. The dataset's default owner is the host root (NOT the container root), so if a user in the container tries to write to the dataset it will not be allowed (since the container root is mapped to uid 100000 on the host, and the dataset doesn't allow anyone other than the Host root to write to it).
- In the host shell, if you run `ls -ld /pool-name/media`, you'll see that the owner of the dataset is the host root (this prevents anyone else from writing to it):
  `drwxr-xr-x 4 root root`
- Furthermore, if you run `cat /etc/subuid /etc/subgid`, you'll see that a container's root user is mapped to 100000:
  `root:100000:65536`
  `root:100000:65536`
  - What does this mean? For security, an unprivileged container's root user does NOT have root privileges on the host system. Instead, the container's root user is mapped to a less privileged user with a uid of 100000.
  - Since the 100000 user doesn't have permission to write to the dataset, the container's root doesn't have access to it by default.
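The mapping described above is just a fixed offset: host uid = 100000 + container uid. A quick sketch of the arithmetic (100000 and 65536 are the default values from `/etc/subuid`):

```shell
# Default mapping from /etc/subuid: root:100000:65536
HOST_BASE=100000   # first host uid of the mapped range
RANGE=65536        # number of uids in the range

# The container's root (uid 0) appears on the host as uid 100000:
echo "container uid 0 -> host uid $((HOST_BASE + 0))"

# A non-root container user, e.g. uid 9030, appears as uid 109030:
echo "container uid 9030 -> host uid $((HOST_BASE + 9030))"
```

This is also why, later in this guide, a container user with uid 9030 shows up as 109030 when you look at file ownership from the host.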
- To solve the issue, we need to change the dataset's owner to uid/gid 100000, and then mount the dataset into the container using the `pct` command.
- On the host shell, run:
  `chown 100000:100000 /pool-name/media`
  - This will allow the container's root to write to the dataset.
- Again on the host shell:
  `pct set 101 -mp0 /pool-name/media,mp=/mnt/media`
  - `101` is the container ID (change this to the ID of the container you want to use).
  - `mp0`: unless you have another drive mounted to the container already, you can probably keep this the same.
  - `mp=/mnt/media`: this is the location in your container where the dataset will be mounted. You can set this to whatever you'd like (such as `mp=/shared`).
And that should be everything.
- Restart the container
- Login as root
- Try `cd /mnt/media` to see if you can access the dataset.
- To check your write access, try `touch test123.txt`. If you don't see any errors, everything was successful.
- Use `rm test123.txt` to get rid of the test file.
Your container's root user should now be able to read, write, and execute on the mounted dataset.
- To share the dataset with other containers, repeat the process, making sure to change the container ID to your desired container:
  - On the host shell: `chown 100000:100000 /pool-name/media`
  - Again on the host shell: `pct set 101 -mp0 /pool-name/media,mp=/mnt/media` (replace `101` with the other container's ID)
Using non-root user to access dataset in container
For best security, it's recommended to use a non-root user to run services in your containers.
- Using an unprivileged container already adds a lot of security by mapping the container's root user to a non-root user on the host. Someone with root access in the container would still need to escalate their privileges to gain control of the host system.
- However, using a non-root user in the container adds an additional level of security by adding two areas of isolation in which an attacker would need to escalate their privileges. First they would need to gain root access of the container, and then they would ALSO need to gain root access of the host.
In the container, while logged in as root:
- Set up a non-root user and group:
  - `groupadd -g 9030 sharegroup`
  - `useradd -u 9030 -g sharegroup john`
  - This example creates a non-root group with the gid 9030 called sharegroup (the gid and group name are arbitrary, as long as the gid is not 0 and stays consistent between all the containers you want to share the dataset between).
  - The user john is arbitrary and can be named whatever you want.
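Putting the two commands above together, with a quick check that the ids came out as intended ("john" and "sharegroup" are the example names; run this as root inside the container):

```shell
# Create the shared group and the non-root user (example names and ids).
groupadd -g 9030 sharegroup
useradd -u 9030 -g sharegroup john

# Verify the uid/gid assignment; this should report
# uid=9030(john) gid=9030(sharegroup).
id john
```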
- Make new directories to store data:
  - `mkdir /mnt/media/Videos`
  - `mkdir /mnt/media/Pictures`
  - etc.
  - This example creates two new subdirectories for storing Videos and Pictures.
  - These subdirectories are arbitrary. You should name them based on whatever you're planning to store in the dataset.
- Change the ownership of the subdirectories:
  - `chown -R john:sharegroup /mnt/media/Videos/`
  - `chown -R john:sharegroup /mnt/media/Pictures/`
  - etc.
  - This will make your non-root user 'john' the owner of the subdirectories.
- Change the permissions of the subdirectories:
  - `chmod 770 /mnt/media/Videos/`
  - `chmod 770 /mnt/media/Pictures/`
  - etc.
  - This allows all users in the sharegroup group with gid 9030 to read and write to the subdirectories.
  - For additional containers, make sure to use the same gid 9030 to read/write to the subdirectories.
  - Note: This is because the owner of the subdirectories will actually be mapped to 109030 on the host, so the other containers also need to use the same owner group.
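To see concretely what `chmod 770` grants, here is the same pattern on a throwaway directory (a scratch path only; the real commands above target `/mnt/media/...` inside the container):

```shell
# Create a scratch directory and apply the same mode used above.
mkdir -p /tmp/demo/Videos
chmod 770 /tmp/demo/Videos

# 770 = rwx for the owner, rwx for the group, nothing for others.
stat -c '%a %A' /tmp/demo/Videos   # prints: 770 drwxrwx---
```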
That should be everything. You now have a non-root user that has full access to the subdirectories within the dataset. This allows the non-root user to store data on the mounted dataset!
- You should leave `/pool-name/media` owned by the container root for security. The non-root user shouldn't have any need to own the mount point.
- And remember that `/pool-name` on the host should be owned by the host root for security.
  - You can check this by typing `ls -ld /pool-name/` and it should say root.
  - Whereas `ls -ld /pool-name/media/` should show that it's owned by 100000:100000.
Running services as non-root
- Using root, install your desired service (such as Jellyfin).
- Edit the service file using `nano /lib/systemd/system/name.service`
  - Such as `nano /lib/systemd/system/jellyfin.service` for Jellyfin
- Under the `[Service]` section of the service file, change:
  - `User=john` (replace 'john' with the non-root user you want to use)
  - `Group=sharegroup` (replace 'sharegroup' with whatever name you chose when making the non-root group)
- Reload and restart:
  - `systemctl daemon-reload`
  - `systemctl restart jellyfin` (replace jellyfin with the name of the service you're running)
  - `systemctl status jellyfin` should show the service running under your non-root user.
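After the edit, the relevant part of the unit file would look roughly like this (jellyfin, john, and sharegroup are the example names used throughout this guide):

```ini
# Excerpt of /lib/systemd/system/jellyfin.service after the change
[Service]
User=john
Group=sharegroup
```

One caveat: package upgrades can overwrite files under `/lib/systemd/system`. Running `systemctl edit jellyfin` instead creates an override file under `/etc/systemd/system` that survives upgrades.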
Author
Franco (FrancoLopezDev)
