Using Virtio-FS directory sharing together with NixOps libvirtd backend
Virtio-FS is a recent mechanism for sharing directories of the host system with a virtualized guest machine managed by libvirt. This example shows how to use it with NixOps to automatically deploy and configure guests that have access to shared host directories.
I’m using NixOps to configure and set up web services in virtual machines. One of these services is DokuWiki, which can be deployed relatively easily. I already have DokuWiki pages from a previous installation, and I want to mount the existing data directory into the DokuWiki virtual machine using Virtio-FS.
Before starting with the example, you should already have NixOps installed and the libvirtd backend configured. Since both NixOps and libvirt itself are somewhat outdated in the stable NixOS package set, it might help to upgrade to the newer versions from the unstable channel.
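As a rough sketch (assuming you have added the unstable channel under the name unstable with nix-channel), the host configuration could enable libvirtd and pull NixOps from unstable like this:
{ config, pkgs, ... }:
let
  # Assumption: a channel was added via
  #   nix-channel --add https://nixos.org/channels/nixos-unstable unstable
  unstable = import <unstable> { };
in {
  # Run the libvirt daemon on the host so the NixOps libvirtd backend can create guests
  virtualisation.libvirtd.enable = true;

  # Use the newer NixOps from the unstable channel instead of the stable one
  environment.systemPackages = [ unstable.nixops ];
}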
The following example will bootstrap and configure a virtual machine which serves a DokuWiki page at http://wiki.nix-testing.local:
{
  network.description = "pi nix testing";
  nix-http = { config, lib, pkgs, ... }: {
    disabledModules = [
      "services/web-servers/caddy.nix"
      "services/web-apps/dokuwiki.nix"
    ];
    imports = [
      ./caddy.nix
      ./dokuwiki.nix
    ];
    deployment = {
      targetEnv = "libvirtd";
      libvirtd = {
        extraDevicesXML = ''
          <filesystem type='mount' accessmode='passthrough'>
            <driver type='virtiofs'/>
            <binary path='${pkgs.qemu}/libexec/virtiofsd'/>
            <source dir='/home/onny/projects/nixops/data/dokuwiki/data'/>
            <target dir='data/dokuwiki'/>
          </filesystem>
        '';
        extraDomainXML = ''
          <cpu mode='custom' match='exact' check='none'>
            <numa>
              <cell id='0' cpus='0' memory='2' unit='GiB' memAccess='shared'/>
            </numa>
          </cpu>
          <memoryBacking>
            <access mode='shared'/>
          </memoryBacking>
        '';
      };
    };
    networking = {
      firewall.allowedTCPPorts = [ 80 ];
      domain = "nix-testing.local";
    };
    services = {
      dokuwiki = {
        webserver = "caddy";
        sites."wiki.${config.networking.domain}" = {
          aclUse = false;
          extraConfig = ''
            $conf['title'] = 'Project-Insanity';
            $conf['userewrite'] = 1;
          '';
        };
      };
    };
    systemd.mounts = [
      {
        what = "data/dokuwiki";
        where = "/var/lib/dokuwiki/wiki.${config.networking.domain}/data";
        type = "virtiofs";
        wantedBy = [ "multi-user.target" ];
        enable = true;
      }
    ];
  };
}
Use this configuration and deploy it with the following commands:
nixops create -d virtiofs-test virtiofs-test.nix
nixops deploy -d virtiofs-test
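Once the deployment has finished, you can check that the share actually shows up inside the guest. One way (using the deployment and machine names from above) is to log into the machine via NixOps and list the Virtio-FS mounts:
nixops ssh -d virtiofs-test nix-http
findmnt -t virtiofs
The findmnt command runs inside the guest and should list the data/dokuwiki share mounted at the DokuWiki data directory.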
The example configuration above uses modified modules for DokuWiki and the web server Caddy which are not yet upstreamed. If you want to test this configuration, you’ll have to download the modified module files dokuwiki.nix and caddy.nix.
The example explained
So what does this configuration do? nix-http is the name of our virtual machine, which will be installed using the libvirtd backend. The extraDevicesXML directive configures the additional shared directory: /home/onny/projects/nixops/data/dokuwiki/data is the directory on the host I want to make accessible to the guest under the identifier data/dokuwiki.
deployment = {
  targetEnv = "libvirtd";
  libvirtd = {
    extraDevicesXML = ''
      <filesystem type='mount' accessmode='passthrough'>
        <driver type='virtiofs'/>
        <binary path='${pkgs.qemu}/libexec/virtiofsd'/>
        <source dir='/home/onny/projects/nixops/data/dokuwiki/data'/>
        <target dir='data/dokuwiki'/>
      </filesystem>
    '';
The extraDomainXML part is necessary for Virtio-FS to work: virtiofsd is a vhost-user device that accesses guest memory directly, so the guest's RAM has to be configured as shared memory. You can read more about it here.
extraDomainXML = ''
  <cpu mode='custom' match='exact' check='none'>
    <numa>
      <cell id='0' cpus='0' memory='2' unit='GiB' memAccess='shared'/>
    </numa>
  </cpu>
  <memoryBacking>
    <access mode='shared'/>
  </memoryBacking>
'';
The following part automatically mounts our share inside the guest. The target directory is the default data location used by the DokuWiki module.
systemd.mounts = [
  {
    what = "data/dokuwiki";
    where = "/var/lib/dokuwiki/wiki.${config.networking.domain}/data";
    type = "virtiofs";
    wantedBy = [ "multi-user.target" ];
    enable = true;
  }
];
Interestingly, we don’t have to change directory or file permissions of the share manually, since the DokuWiki module uses systemd tmpfiles to chown all files and make them readable and writable by the web server process.
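Just as a rough illustration (this is what the module does for you, not something to add yourself), the generated rules behave roughly like the following; the caddy user and group here are an assumption based on webserver = "caddy" above:
# Illustration only: the DokuWiki module generates tmpfiles rules along these
# lines itself. The caddy user/group is an assumption based on webserver = "caddy".
systemd.tmpfiles.rules = [
  "d /var/lib/dokuwiki/wiki.nix-testing.local/data 0750 caddy caddy - -"
  "Z /var/lib/dokuwiki/wiki.nix-testing.local/data 0750 caddy caddy - -"
];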