Part 1: Why I’m Building a Local Lab Assistant

Ever since I watched Star Trek: The Next Generation, I’ve wanted my own “computer.”

Not just a laptop. Not just a voice assistant. I mean the kind of system where you can ask a question out loud and it responds intelligently. The kind where Geordi can bounce ideas off an AI in the holodeck. The kind where Data can instantly access massive amounts of information and help reason through a problem.

And then there’s Tony Stark in his lab — talking to J.A.R.V.I.S., asking for specs, running simulations, thinking out loud while building something physical.

That idea stuck with me.

But here’s the thing: I don’t want that system in the cloud. I want it in my workshop.

I’m building a local lab assistant — an AI partner that lives next to my tools.

This idea didn’t come from science fiction alone. It came from very ordinary moments. Moments where I’m in the middle of a CNC job and I can’t remember the exact feed rate for pink foam. Or when I’m adjusting laser settings and second-guessing power and speed. Or when I’m listening to a podcast in the garage and I want to pause and ask a question about something that was just said.

Yes, I could pull out my phone. I could search. I could scroll through bookmarks. I could print out reference sheets and pin them to the wall.

But that breaks flow.

What I want is something conversational. Something I can press, talk to, and release — and it responds. Something I can ask to repeat itself without retyping anything. Something that feels more like a partner than a search engine.

There’s also the reality that making can be isolating. I spend a lot of time building alone. I enjoy that. But there’s a difference between solitude and having no one to bounce ideas off. Even something as simple as talking through a design problem out loud and getting feedback changes how you think.

So part of this project is philosophical.

I want a creative companion in the workshop.

But another part of this decision is practical.

I want it to be local.

Privacy matters to me. I don’t love the idea of every conversation being routed through external servers. A local model means I control the system. It lives on my hardware. It runs on my network. It’s mine.

There’s also the cost side. If I’m experimenting, iterating, building tools on top of tools — those API calls add up. A local model gives me room to experiment freely without worrying about usage creeping upward every time I test something.

And maybe most importantly: this aligns with the Imagine · Make · Repeat philosophy.

IMR isn’t just about building projects. It’s about building systems that help you build projects. It’s about creating infrastructure that reduces friction and increases creative flow.

If I can build a tool that:

  • Transcribes what I say

  • Responds intelligently

  • Speaks back to me

  • Remembers recurring specs and procedures

  • Discusses a video or podcast I’m listening to

  • Helps me think through design decisions

Then I’m not just building a gadget.

I’m building workshop infrastructure.

This is the beginning of that experiment.

The first version won’t be perfect. It will probably be slow. It will break. I’ll choose the wrong model at least once. I’ll wire something incorrectly. I’ll rethink the architecture.

That’s fine.

The goal isn’t perfection.

The goal is iteration.

In the next part, I’ll start with the simplest possible version — a minimum viable lab assistant. A Raspberry Pi. An arcade button. Push-to-talk. Local speech-to-text. A local language model. Text-to-speech.

Press. Talk. Release. Respond.

That’s it.
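The press-talk-release-respond loop is simple enough to sketch in a few lines. This is just an illustration of the shape of the pipeline, not the real build: the four stage functions here are placeholders I made up, standing in for whatever button handler, speech-to-text engine, local model, and text-to-speech voice the project actually ends up using.

```python
# A hypothetical sketch of one press-to-talk cycle.
# Each stage is passed in as a callable, so real engines
# (mic capture, local STT, local LLM, TTS) can be swapped in later.

def handle_interaction(record, transcribe, ask, speak):
    """Run one cycle: capture audio while the button is held,
    turn it into text, get a reply, and speak it back."""
    audio = record()              # runs from button press to release
    question = transcribe(audio)  # local speech-to-text
    answer = ask(question)        # local language model
    speak(answer)                 # text-to-speech
    return answer

# Example wiring with stand-in stages (no hardware, no models):
if __name__ == "__main__":
    handle_interaction(
        record=lambda: b"raw-audio-bytes",
        transcribe=lambda audio: "What feed rate for pink foam?",
        ask=lambda q: "You asked: " + q,
        speak=print,  # prints the answer instead of speaking it
    )
```

The point of keeping the stages as swappable callables is that every part of the stack — model, STT, TTS — is something I expect to replace at least once as the project evolves.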

Small. Local. Repeatable.

And from there, we’ll see how far this workshop computer can evolve.