Our Savior's Experiments
After building his tech fence, PewDiePie went deeper. These videos explore what's possible when you own your hardware, run your own AI, and stop relying on cloud services for everything.
The Supercomputer
PewDiePie didn't stop at a Raspberry Pi. He built a 10-GPU rig capable of running large language models locally. No API keys, no cloud costs, no data leaving his network.
What He Did With It
- vLLM: A high-throughput inference server that runs local AI models for text generation, coding assistance, and experimentation
- Custom YouTube Extension: Built a browser extension powered by his own local AI models — no OpenAI, no Google, just his hardware
- Folding@Home: When the GPUs aren't running AI, they contribute to distributed scientific computing — folding proteins for medical research
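Part of what makes a setup like this practical is that vLLM exposes an OpenAI-compatible HTTP API, so any tool that can POST JSON can use the local models. A minimal sketch of preparing such a query, where the model name, host, and port are assumptions you'd adjust to whatever your own `vllm serve` instance is running:

```python
import json
import urllib.request

# Build a request against a local vLLM server's OpenAI-compatible
# /v1/completions endpoint. The model name, host, and port here are
# assumptions -- match them to your own running server.
def local_completion_request(prompt: str,
                             model: str = "Qwen/Qwen2.5-7B-Instruct",
                             base_url: str = "http://localhost:8000/v1"):
    body = json.dumps({"model": model, "prompt": prompt, "max_tokens": 64})
    return urllib.request.Request(
        f"{base_url}/completions",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )

# With a server running, urllib.request.urlopen(req) returns the completion.
# This only prepares the request, so no GPU is needed to try it.
req = local_completion_request("Write a haiku about GPUs.")
```

No API key appears anywhere in that request: the "auth" is that the server lives on your own network.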
Why It Matters
This isn't about having the biggest rig. It's about proving that you can run powerful AI locally. Most people don't need 10 GPUs — a single decent GPU with Ollama gets you 80% of the way there. But PewDiePie showed what the ceiling looks like when you fully commit.
What You Can Take From This
Start Small
You don't need 10 GPUs. Install Ollama on your existing machine, try a 7B-parameter model, and see what local AI can do.
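To see why a 7B model fits on ordinary hardware, a back-of-the-envelope estimate helps: weight memory is roughly parameter count times bytes per parameter. These are ballpark figures only, ignoring KV-cache and activation overhead:

```python
# Rough lower bound on memory for a model's weights:
# parameters x (bits per parameter / 8). Real usage is somewhat higher
# once the KV cache and activations are counted; treat as ballpark.
def weight_gb(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

print(weight_gb(7, 16))  # 14.0 GB at fp16 -- needs serious hardware
print(weight_gb(7, 4))   # 3.5 GB at 4-bit -- fits a modest consumer GPU
```

That 4x gap is why quantized 7B models are the standard starting point for local AI: they fit comfortably in 8 GB of VRAM.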
Contribute
Folding@Home runs on any GPU. While your computer is idle, it can contribute to real scientific research. Free to set up, zero maintenance.
Build Tools
PewDiePie built a custom YouTube extension. The tools you build for yourself — with your own AI, on your own hardware — are tools no one can take away or paywall.
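As a sketch of what such a personal tool looks like under the hood: the extension just POSTs to a model server on localhost instead of a paid API. This hedged Python stand-in uses Ollama's /api/chat endpoint on its default port 11434; the model tag and the title-summarizing task are illustrative assumptions, not PewDiePie's actual code (which ran against his own vLLM-served models):

```python
import json
import urllib.request

# Personal-tool sketch: summarize a YouTube video title with a model
# running on localhost, via Ollama's /api/chat endpoint. The model tag
# and the task are assumptions for illustration only.
def summarize_title_request(title: str, model: str = "llama3:8b"):
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize video titles in five words."},
            {"role": "user", "content": title},
        ],
        "stream": False,
    })
    return urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )

# With Ollama running, urllib.request.urlopen(req) returns JSON whose
# "message" field holds the reply -- no key, no billing, no paywall.
req = summarize_title_request("I Built a 10 GPU Supercomputer")
```

Swap the endpoint for your own server and the tool keeps working no matter what any cloud provider changes or charges.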