This second video is the hardware build-out. Follow us on YouTube @goWizzWare to keep up with the build!
OpenClaw AI Agent Series – Episode 2: Building the System
In Episode 2 of the OpenClaw AI Agent series, we move from planning to execution by assembling the custom hardware designed to run AI agents locally. After breaking down the components in Episode 1, this step focuses on putting the system together with performance, cooling, and future scalability in mind. If you’re looking to build a private AI computer capable of running local LLMs and agent workflows without relying on the cloud, this is where the foundation becomes real.
The most time-consuming part of the build was installing the AIO and positioning everything so the power and RGB cables were clean and not under tension. Once that was dialed in, the rest went together quickly. Initial startup was straightforward: we enabled EXPO in the BIOS to get the RAM up to full speed, and Ubuntu 24.04 installed without any issues. Right now, this Mini-ITX system is performing well and running private AI models locally, including Qwen 14B, DeepSeek 14B, LLaMA 8B Q8, and Qwen2.5-Coder 14B. VRAM usage is holding steady at around 9.8 GB.
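If you're wondering why an 8B model at Q8 quantization lands near that 9.8 GB figure, a standard back-of-envelope rule explains it: weights take (parameter count × bits per weight ÷ 8) bytes, plus a margin for the KV cache and runtime overhead. Here's a rough sketch of that arithmetic; the 1.5 GB overhead figure is an assumption for illustration, not a measured value from our system:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: weight storage plus a fixed overhead margin.

    overhead_gb is an assumed allowance for the KV cache and runtime
    buffers; real usage varies with context length and runtime.
    """
    weight_gb = params_billion * bits_per_weight / 8  # GB for the weights alone
    return round(weight_gb + overhead_gb, 1)

# LLaMA 8B at Q8 (8-bit weights): ~8 GB of weights + overhead
print(estimate_vram_gb(8, 8))   # ~9.5 GB, close to the ~9.8 GB we observed

# A 14B model at 4-bit quantization fits in a similar budget
print(estimate_vram_gb(14, 4))  # ~8.5 GB
```

This is why 14B models are typically run at 4-bit quantization on a single consumer GPU, while 8B models have headroom for 8-bit.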
Here’s what we used:
- Graphics Card – https://amzn.to/3OwqA64
- Case – https://amzn.to/4sMjkkG
- Motherboard – https://amzn.to/4vJLAHA
- Power Supply – https://amzn.to/4mA3YOI
- Memory – https://amzn.to/48FI2vO
- Hard Drive – https://amzn.to/3OxJXvx
- CPU – https://amzn.to/4sIzgV0
- AIO – https://amzn.to/4tsmxY0
- Speakers – https://amzn.to/4mDPnBL
👉 Message me if you want more details!
Follow along on YouTube @InSiteinc for what’s next in our build!
Benefits of a Private AI Agent System
- No API costs or usage limits
- Full control over your data and workflows
- Runs completely offline for privacy and reliability
- Customizable hardware and software stack
- Ability to run multiple AI models with assigned roles
- Faster response times with local processing
- Scalable platform you can expand over time
This is the foundation for what we’re building with OpenClaw: a system designed to support persistent AI agents that can evolve into client tools, not just chat interfaces.
The Goal of the OpenClaw System
This series is not about building a one-off machine. The goal is to create a repeatable, scalable local AI platform that can:
- Run multiple LLMs locally
- Support agent workflows
- Integrate tools and automation
- Operate without internet dependency
Over time, this evolves into a system that behaves more like a persistent assistant layer, not just a prompt-response tool.
What’s Coming Next
Now that the hardware foundation is in place, the next steps in the series will move into software and orchestration:
- Installing and configuring local LLM runtimes
- Setting up the AI interface layer
- Assigning roles to different models
- Building the agent pipeline
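The "assigning roles to different models" step above can be sketched in a few lines: a mapping from a role to whichever local model handles it best, with a general-purpose fallback. The model tags here are hypothetical placeholders in the spirit of the models we're running, not the exact configuration we'll use:

```python
# Hypothetical role-to-model mapping for a local agent setup.
ROLE_MODELS = {
    "coder":    "qwen2.5-coder:14b",  # code generation and review
    "reasoner": "deepseek:14b",       # planning and analysis
    "general":  "llama:8b-q8",        # everyday chat and fallback
}

def pick_model(role: str) -> str:
    """Return the model assigned to a role, falling back to the general model."""
    return ROLE_MODELS.get(role, ROLE_MODELS["general"])

print(pick_model("coder"))    # qwen2.5-coder:14b
print(pick_model("unknown"))  # llama:8b-q8
```

The point is that the pipeline routes each task to a specialist model instead of sending everything to one general model, which is what lets several mid-size local models cover the ground a single large cloud model would.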
This is where the system starts to come alive. (ALIVE! in my Dr. Frankenstein voice)
Who This Series Is For
This series is built for:
- Developers and builders who want control over AI
- Business owners exploring private AI solutions
- Anyone tired of API limits and recurring costs
- People who want to understand how AI systems actually work under the hood
Final Thoughts
If you’ve been looking for a way to get into AI without relying on someone else’s infrastructure, this is your entry point. Follow along here and on our YouTube channel. Have questions? Contact us!