If you're running Home Assistant on its own dedicated box, it's likely doing the job, but it's also barely scratching the surface of what that hardware can actually handle.
And the moment you try to expand it, add cameras, AI detection, or extra services, things can get messy pretty quickly.
Now imagine running all of that, properly structured, on something about the size of a paperback book.
That's what we're going to build here.
We're taking the ZimaBoard 2, installing Proxmox, and turning it into a smart home server with Home Assistant, Frigate, HACS, and everything running locally, but more importantly, running in isolation.
So by the end of this video, you won't just have something that works… you'll have something that actually scales when you need it to.
We're going to take a fresh ZimaBoard 2, install Proxmox, set up Home Assistant, add HACS and Frigate, and build the whole thing step by step. And crucially, Frigate is going to have its own local storage, just like a proper NVR would.
It's all chaptered, so you can follow along or skip around if you need to, but by the end, you'll have a complete smart home and security setup all running locally.
Why this setup?
Before we install anything, it's just worth explaining why we're doing it this way, because at first glance, you might be thinking: why not just install Home Assistant directly and be done with it?
And honestly, that's exactly what I used to do.
You've got a single machine, Home Assistant running directly on it. Everything is simple, fast, and for a lot of setups, it works absolutely fine.
But the problem starts when you begin expanding it. The moment you start adding cameras, AI detection, or extra services, everything starts competing for the same resources on that one system.
And that's where Proxmox changes the approach completely.
Because instead of everything sitting in one shared environment, you turn the ZimaBoard into a proper virtualisation host. So now each part of your setup is isolated from the others, rather than all fighting for the same resources.
So in this build, Home Assistant is running in its own virtual machine, Frigate is running in its own container, and anything else you add later sits separately as well.
And that isolation is really the key idea here. Because if something goes wrong in Home Assistant, your camera system keeps running.
If Frigate has an issue, your smart home doesn't go down with it.
And if you want to experiment with something new, you can do that safely without risking the rest of your setup.
So this isn't just about performance. It's about control, separation, and being able to recover or rebuild parts of your system without everything collapsing at once.
Hardware overview
So this is the ZimaBoard 2 itself. It comes in this great sustainable packaging with a note from the founder of IceWhale Technology, and it's a solid chunk of metal to hold in your hand; it doesn't feel cheap at all.
It's a single board server, and compared to the original version, this is a pretty significant upgrade.
It runs an Intel N150 quad-core processor, boosting up to 3.6GHz. What that gives us in practice is enough performance to comfortably run multiple virtual machines at the same time without immediately running into resource limits.
You can get it with up to 16GB of LPDDR5 memory, which is what I'm using here. And once you start running Home Assistant alongside something like Frigate, memory is one of the first constraints you actually run into. So that extra headroom just helps keep everything stable when multiple services are running together.
For storage, you get onboard eMMC for ZimaOS, 64GB on the model I'm using, plus dual SATA ports with power connectors. So you can add SSDs or hard drives directly, which means you can easily expand this into a small NAS setup later if you want to build on it.
In terms of connectivity, you've got dual 2.5 gigabit Ethernet, USB 3.1 ports, PCIe expansion, and a Mini DisplayPort, so there's also room here for NVMe expansion or other hardware add-ons if you need them.
The entire chassis acts as one large heatsink, and the board is built around a 10-watt TDP, so it's designed to be efficient enough to run 24/7 without worrying about power use.
Don't let the size fool you, this isn't a Raspberry Pi. The N150 is a proper x86 processor, and it handles multiple demanding tasks simultaneously without even breaking a sweat.
| Specification | Detail |
|---|---|
| CPU | Intel N150 quad-core, up to 3.6GHz |
| RAM | Up to 16GB LPDDR5 |
| Built-in storage | 64GB eMMC (16GB RAM model) / 32GB eMMC (8GB RAM model) |
| SATA | Dual SATA ports with power connectors |
| Ethernet | Dual 2.5 Gigabit ports |
| USB | USB 3.1 ports |
| Expansion | PCIe slot, NVMe capable, Mini DisplayPort |
| TDP | 10W - chassis acts as heatsink, designed for 24/7 operation |
What you'll need
Before we start, let's quickly go through what you'll need to get this set up.
You'll need an 8GB USB drive, which we'll use for the Proxmox installer so we can actually get the system onto the ZimaBoard in the first place.
For storage, you'll want a SATA SSD or hard drive, ideally something designed for constant writes if you're planning on using Frigate, because that's where all your camera footage is going to be recorded and stored over time.
You'll also need an Ethernet cable so you can connect the ZimaBoard directly to your network.
And then finally, you'll need a monitor and keyboard, but only for the initial setup. Once Proxmox is installed and running, everything is managed through the web interface, so those can be removed and the system can just run from that point onwards.
Configuration
Now, the ZimaBoard 2 I'm using here is the 16GB version, and that comes with a built-in 64GB eMMC drive. The 8GB model has 32GB, and that's enough to install Proxmox on its own, but it does start to fill up quicker than you expect once you start adding in system updates and uploading ISO images for virtual machines.
So for this setup, I'm going to use a 256GB SSD as the main Proxmox drive, which just gives us a lot more breathing room for everything we're going to build.
And then for storage, I'm adding a 6TB hard drive, which we're going to use for VM storage and container data, and later on, this is also where Frigate will store all of its camera footage.
All of this goes into the little mini rack you can get with the ZimaBoard 2. It supports two 3.5-inch drives, and the ZimaBoard itself just screws onto the top, which turns it into a really compact setup that you can easily leave sitting on a desk or tucked away in a cupboard.
ZimaOS
It's worth mentioning that the ZimaBoard 2 actually comes with something called ZimaOS pre-installed on the built-in eMMC storage.
ZimaOS is a really capable operating system in its own right, it gives you a range of apps you can install, including Home Assistant, and it also supports NAS functionality straight out of the box.
So if you wanted a much simpler setup, you could absolutely just use ZimaOS on its own and have a working system very quickly.
But in this case, we're going to build on top of it using Proxmox, because it gives us more flexibility in how we structure things, especially when it comes to running Home Assistant and Frigate in properly isolated environments.
BIOS changes
Now with all of that in place, the first thing we need to do is get into the BIOS, and on the ZimaBoard that's done by pressing Delete during boot.
Once you're in here, there are three settings we need to go through, and these are important, as most people don't realise these need changing and then spend ages wondering why they can't install Proxmox.
First is Intel Virtualisation Technology, and we want to make sure that's enabled. This is essential, because without it, Proxmox can't actually run virtual machines properly.
Next is Secure Boot, and we want to disable that. The reason for this is that Secure Boot can sometimes prevent Proxmox from booting unless it's set up and configured correctly, so for simplicity in a home lab setup, we just turn it off.
Finally, there's Restore AC Power Loss, and we want to set that to Power On. This just means that if the power ever goes out, the system will automatically start back up again without needing any manual intervention.
What I found interesting here, and kudos to IceWhale, these settings were all set correctly by default, so there's no hunting around trying to figure out what to change. These guys have this pre-configured for you, ready to be a server.
| Setting | Value | Why |
|---|---|---|
| Intel Virtualisation Technology | Enabled | Essential, without this Proxmox cannot run virtual machines properly |
| Secure Boot | Disabled | Can prevent Proxmox from booting unless set up and configured correctly |
| Restore AC Power Loss | Power On | System automatically starts back up after a power cut, no manual intervention needed |
Installing Proxmox
Now with those BIOS settings in place, the next step is to install Proxmox.
So we need to go to the Proxmox website and download the latest version, and at the time of recording this video, that's 9.1.
Once you've got that downloaded, we need to flash it onto a USB drive. For this I'm using Balena Etcher, which is free and really simple to use. You just select the Proxmox ISO file, select your USB drive, and then flash it. And once that's finished, you've got your installer ready to go.
With that done, plug the USB drive into the ZimaBoard 2, connect a monitor, keyboard, and network cable, and then power it on.
To boot from the installer, we need to get into the boot menu. On the ZimaBoard this is usually F11 during startup, and then select the USB drive as the boot device.
Once that's done, we're into the Proxmox installer. From here it's pretty straightforward: we accept the licence agreement and choose the drive we're installing to. In my case that's the 256GB SSD rather than the onboard eMMC.
After that, we set the country and timezone, and then we create a root password, which is what we'll use to manage the entire system going forward.
Now the next step is really important, and this is where we set our network configuration. We want to assign a static IP address to Proxmox.
The reason for this is simple. This machine is going to host all of our services, and a lot of what we build later will connect directly to it. So if that IP address ever changes, anything pointing to it, like browser bookmarks, integrations, or management access, will stop working until it's updated.
I also set a hostname at this point as well, just to keep things clean on the network.
Once that's all done, Proxmox installs to the drive and then reboots, and at that point you can remove the USB stick.
From here onwards, everything is managed through a web browser. So on another device connected to the same network, we just go to the IP address we set, on port 8006, so something like https://192.168.1.100:8006, substituting your own address.
We log in with the username root and the password we just created, and now we're inside the Proxmox dashboard. At this point, the ZimaBoard is effectively a full virtualisation server.
The first time you log in, you'll see a subscription message, and that's expected. Proxmox is open source, but enterprise updates require a subscription. For a home setup like this, we can continue without it.
And occasionally you might find that clicking OK doesn't dismiss that message properly. If that happens, just refresh the page and it clears.
Before we start building anything, we're going to run a quick community helper script that tidies up the system and disables that subscription message for a home lab setup.
If you're using Proxmox in a business environment, then you should absolutely consider a subscription for proper support, but for a smart home or homelab setup, this just makes things a bit more convenient.
To do that, we open a shell inside Proxmox and run the command shown on screen, I'll also put it in the description below.
When it runs, you'll get a few prompts. In most cases, you can just follow along with the defaults I select here, enabling the no-subscription repository, disabling the enterprise repo, and turning off the subscription nag.
After that completes, we accept the final prompts to update Proxmox and reboot the system, and then we're ready to start building out our virtual machines.
Installing Home Assistant
OK, so once Proxmox has rebooted and you've logged back in, we're now going to create our first virtual machine, and that's going to be Home Assistant.
Again, the easiest way to do this is using the Proxmox Helper Scripts community project, and there is a dedicated Home Assistant OS script that automates most of the setup for us and reduces the chance of mistakes.
So we open a shell inside Proxmox, and at the prompt we run the command shown on screen. I'll also put this in the description so you can copy it directly.
When we run it, the script will ask a few questions. The first is whether we want a default or advanced setup, and for this, I'm going with the default configuration.
It will also ask if we want to keep the Home Assistant image that gets downloaded. In most cases, you can say no here unless you're planning to spin up multiple Home Assistant instances.
Once that finishes, Proxmox builds the VM for us automatically, and after a short while you'll see Home Assistant appear on the left-hand side under PVE. The icon shows it's a virtual machine, and when the green indicator appears, it's running.
If we open the console, we can watch Home Assistant boot up, and once it's ready we'll see the usual Home Assistant startup screen with its IP address. All this is super snappy thanks to the N150 processor on the ZimaBoard 2.
From another browser window, we can then go to that IP address on port 8123, and we'll land on the Home Assistant setup screen. If you're migrating from another system, this is where you can restore a backup, or you can just continue with a fresh install.
Now before we go any further, we're going to shut the VM down again, because there are a couple of important tweaks we need to make.
The first is USB passthrough. If you're planning to use something like a Zigbee or Z-Wave dongle, we need to pass that through from the ZimaBoard into the virtual machine.
In Proxmox, there are two options here: you can pass through an entire USB port, which means anything plugged into it will be forwarded to the VM, or you can pass through a specific USB device using its vendor and device ID.
For this setup, I'm going to use the device-specific passthrough, because it gives us a bit more control.
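In the GUI, this is done from the VM's Hardware tab by adding a USB device and selecting it from the list; under the hood it ends up as a line in the VM's config file. As a sketch — the VM ID and the vendor:device ID below are examples, so check yours with `lsusb` on the Proxmox host:

```
# /etc/pve/qemu-server/100.conf  (VM ID 100 is an example)
# 10c4:ea60 is a common Silicon Labs Zigbee dongle ID — verify yours with lsusb
usb0: host=10c4:ea60
```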
The next change is storage caching. By default, Proxmox leaves the disk cache mode on its standard setting, but for Home Assistant we want to change this to write-through.
This makes sure data is written directly to disk more reliably, which is especially important for something that's running 24/7 like Home Assistant.
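In the GUI this is the Cache option on the VM's hard disk; in the config file it appears as a `cache=writethrough` flag on the disk line. A sketch, with example storage, disk, and size values:

```
# /etc/pve/qemu-server/100.conf — disk line with write-through caching
# (storage name, disk name, and size will differ on your system)
scsi0: local-lvm:vm-100-disk-0,cache=writethrough,size=32G
```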
Once that's done, we can start the VM again by right-clicking and selecting start.
And the final step is to set a static IP address inside Home Assistant itself. So once it's booted, we go into our network settings, and under IPv4 we set a static address.
If you change the IP here, just remember you'll need to update the browser address as well so you don't lose access.
Storage drive configuration
I'm now going to add the hard drive as available storage within Proxmox.
If we go into the Disks section, we can see the additional drive listed there, and this is the one we're going to use for storage in this setup.
But before we do anything with it, we need to wipe the disk completely, so there's nothing left on it that could interfere with how we're going to configure it.
So we select the drive, choose wipe, and let that complete.
Once that's finished, we create a directory-based storage from that disk. This tells Proxmox that the drive is now available for large, persistent data, such as Frigate recordings and other files that build up over time.
If you prefer working from the command line, you can also set this up using a shell command instead, and I'll include that in the description.
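For reference, once the directory storage exists, Proxmox records it in /etc/pve/storage.cfg, so you can sanity-check the result there. The storage name and mount path below are examples; yours will differ:

```
# /etc/pve/storage.cfg — directory storage entry for the 6TB drive
dir: storage6tb
        path /mnt/pve/storage6tb
        content images,rootdir,iso,backup
```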
At this point, Proxmox is installed on the SSD, and that's also where the system and virtual machines live, with the hard drive doing all the heavy lifting for recordings.
Installing Frigate
OK, so now we're going to install Frigate.
For this, we're going to create what's called an LXC container rather than a full virtual machine. Containers are much lighter than VMs, so they use fewer resources, which makes them ideal for something that's going to be running all the time like Frigate.
To make the installation easier, we're going to use the Proxmox Helper Scripts community project again, and there's a dedicated Frigate script that automates most of the setup for us.
So we open a shell inside Proxmox, and run the command shown on screen. I'll also include it in the description so you can copy it directly.
When we run it, the script will take a little while to complete, and we'll get a few prompts along the way. I'm choosing not to share anonymous data, then for storage we select the local template storage, and for the container itself we use local-lvm, and then we let the install complete.
Once that finishes, the Frigate container will be created in Proxmox.
At this point, we shut it down so we can make a couple of important changes before starting it properly.
The first change is storage configuration. We need to make sure Frigate uses the hard drive we set up earlier, because that's where we want it to store recordings.
To do that, we open the container configuration file, in my case that's the .conf file for container 101 inside Proxmox, and edit it using the nano editor.
Inside that file, we add a mount point that points to the directory we created earlier on the 6TB drive.
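The added line is a standard LXC mount point entry, which bind-mounts a host directory into the container. A sketch, assuming container 101 and an example host path (the container-side path is where Frigate keeps its media):

```
# /etc/pve/lxc/101.conf — bind-mount the 6TB recordings directory into the container
# (the host path is an example; /media/frigate is Frigate's default media location)
mp0: /mnt/pve/storage6tb/frigate,mp=/media/frigate
```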
Once that's saved, we restart the container.
Now if everything is correct, Frigate will be using the dedicated hard drive for storage rather than the system disk.
And you can confirm that by checking the storage path inside the container: it should now point to that mounted directory instead of the default storage location.
Installing HACS
Now that Frigate is up and running, the next thing we're going to do is install HACS inside Home Assistant.
This is what gives us access to the wider Home Assistant community ecosystem, so our integrations, custom cards, and additional features that aren't included by default.
So in a new browser window, we go to the HACS website, click on "Start using HACS", and then download HACS.
We're then given a repository URL, so we copy that.
And then back inside Home Assistant, we go into Apps, and add that URL as a new repository. From there, we can search for HACS and install it like a normal app.
Once installation completes, we start it up, and then check the logs, and this will usually tell us that Home Assistant needs to be restarted before continuing.
Once we've restarted Home Assistant and it's back up, we go into Integrations and complete the HACS setup.
During setup, we'll need to tick the boxes confirming we understand what HACS does, and then follow the on-screen steps.
At this point, you'll also need a GitHub account for authorisation, so if you don't already have one, you'll need to create it.
Once you've authorised HACS through GitHub, you can come back into Home Assistant and complete the setup, and HACS will now be fully available inside your system.
Configuring MQTT
Next up is MQTT, and we need this because Frigate uses it to communicate with Home Assistant.
So the first thing we do is go into Home Assistant and create a new user. I normally call this something like MQTTUser, and make sure that login is enabled. Then we set a username and password for that account.
Once that's done, we go into Home Assistant Apps and install something called Mosquitto Broker.
After it's installed, we go into the configuration tab, and under logins we add a new entry using the username and password we just created. We save that configuration, and then restart Home Assistant.
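For reference, the logins entry in the Mosquitto Broker add-on's configuration looks like this — the username is whatever you created above, and the password here is a placeholder:

```yaml
logins:
  - username: MQTTUser
    password: your-password
```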
Once Home Assistant is back up, we go back into Apps and start the Mosquitto Broker. Then we check the logs just to make sure there are no errors showing up.
If everything looks good, we go into Integrations, and at that point MQTT should now appear automatically. So we just add it from there to complete the setup.
Home Assistant Frigate setup
Next up is installing Frigate in Home Assistant.
So we go into HACS, search for Frigate, and install it.
Once it's installed, we restart Home Assistant to complete the setup.
After the restart, we go into Integrations and add Frigate. So we search for it, select it from the list, and in the setup window we enter the URL for our Frigate server, which is the IP address of the Frigate container, plus the port it's running on.
Once that's added, the integration should complete.
However, if we look at the Frigate entities at this point, we'll notice they're not available yet.
And that's because we still need to properly configure Frigate itself inside Proxmox before Home Assistant can fully see and use it.
Configuring Frigate
We're on the home straight now, and the last part of this build is to configure Frigate according to our needs.
I'm going to put in a configuration that works with one of my cameras, and I'll include a link to that configuration in the description, but you will need to adjust it to match your own setup. I strongly recommend reading the Frigate documentation so you can get the best from it.
There are a few important parts in the configuration that you need for this setup.
The first is the MQTT configuration, because this is what allows Frigate to communicate with Home Assistant.
The second is storage, where we define where Frigate stores its recordings and its database.
The recordings are sent to the 6TB hard drive we set up earlier, which is used for all the large, continuous camera footage. The database itself stays on the system SSD, in the container filesystem, which keeps it fast and responsive.
One other important part of the configuration is enabling hardware acceleration and OpenVINO. So we set FFmpeg to use automatic hardware acceleration, and we set the detector to use OpenVINO on auto.
What this does is allow Frigate to use the ZimaBoard's Intel hardware for video processing and AI detection. Without this, Frigate would be leaning heavily on the CPU for every detection event. With it, the processor is barely used, and that's largely down to the Intel iGPU built into the N150.
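Pulling those pieces together, a minimal configuration sketch looks something like the following. The broker address, credentials, and camera stream URL are all placeholders for your own values, and you'll want the Frigate documentation for anything beyond the basics:

```yaml
mqtt:
  host: 192.168.1.100        # IP of Home Assistant / Mosquitto broker
  user: MQTTUser
  password: your-password

detectors:
  ov:
    type: openvino
    device: AUTO             # let OpenVINO pick the Intel iGPU

ffmpeg:
  hwaccel_args: auto         # automatic hardware acceleration

record:
  enabled: true
  retain:
    days: 7                  # adjust retention to suit your drive size

cameras:
  front_door:                # example camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.60:554/stream1
          roles:
            - detect
            - record
```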
Once the configuration is in place, we restart Frigate.
Then we go into Mosquitto Broker in Home Assistant, and if everything is correct, we should see in the logs that it has successfully connected using the username we set earlier.
Next, we go into MQTT in Home Assistant, open the settings, and click "Configure MQTT options", then submit.
After that, we reload the MQTT broker.
Back in Proxmox, we reboot the Frigate container, and after a few seconds we should see the camera feed appear in the Frigate interface.
One final step I do is go back into the Frigate integration inside Home Assistant and reload it, and after that, all of the entities should appear correctly.
The end result
So that's it. As you can see, with everything set up, the ZimaBoard 2 is only using a very small amount of processing power to run both Home Assistant and Frigate.
I've got a full Home Assistant smart home setup and a local NVR running on just one tiny computer, and right now, overall usage is sitting at less than 10% CPU, and roughly a third of the available RAM, which is exactly what you want to see from a system like this.
And in terms of power consumption, it's drawing about 15 watts of power, which for something that's running a hypervisor, Home Assistant and Frigate, is only slightly more than my mini PC running just Home Assistant.
If we look at the Frigate system stats, the detector inference speed is sitting at just over 9 milliseconds, that's how fast OpenVINO is processing each detection pass. The detector itself is only using about 2% of the CPU. The Intel hardware inside this ZimaBoard 2 is doing exactly what it's supposed to.
I've been really impressed with how well this ZimaBoard 2 has performed and how well this all holds together. When I said at the start that we'd be running all of this on something the size of a paperback book, this is what that actually looks like.
And if you've been running Home Assistant on an old PC or a Raspberry Pi, and you've wanted to add cameras but weren't sure where to put Frigate, this is a really strong option.
You've got proper virtualisation, you keep your smart home and your NVR cleanly separated, and you've got plenty of room to expand later, and best of all, everything stays completely local.
Now, whilst this setup isn't as simple as just flashing Home Assistant OS onto a Raspberry Pi, it's also not as complicated as it might look at first.
If you follow the steps in this video and take your time with it, it's a very achievable project, and what you end up with is a setup that's more capable, and much easier to maintain long term.
But it doesn't have to stop there. With the processing power this ZimaBoard 2 has, you've got a tonne of headroom left to add a lot more. Want to set up storage for your family photos? Then just install Immich. Want to keep copies of your important documents? Then Paperless is your option. And with the integrated GPU, you can even run Plex on it as well.
If you want to build your own little server like this running Home Assistant and Frigate, or anything else for that matter, then you can pick up the 8GB ZimaBoard 2 for 279 US dollars, which given the price of components nowadays is a pretty good bargain. If you opt for the 16GB version, it's just 70 dollars more.
I'll leave an affiliate link in the description, along with an exclusive discount code that Zima have provided just for my audience if you want to pick one of these up, alternatively, you can just scan the QR code shown on the screen here, if you'd prefer to do that.
If you found this video helpful, then I'd appreciate a thumbs up and why not subscribe to the channel for more smart home content like this but as always, thanks for watching and I'll see you in the next video, bye for now.
ZIMABOARD 2 (8GB) - BUY NOW | ZIMABOARD 2 (16GB) - BUY NOW
Affiliate links - USE CODE ByteofGeek15 FOR $15 OFF