Local AI

Intelligence that runs on your hardware. No cloud, no subscription, no surveillance.

Every time you use ChatGPT, Google, or any cloud AI, your question, your data, and your context are sent to a server in America. It's stored. It's analysed. It's used to train the next model. You paid for the answer and gave away the question.

Local AI runs on a box in your building. The question never leaves the room.

Why This Matters for Communities

Communities hold sensitive knowledge — cultural, legal, medical, governance. When an elder asks an AI about case law for community court, that question shouldn't end up on OpenAI's servers in San Francisco.

Local AI keeps knowledge work on country.

| | Cloud AI (ChatGPT, etc.) | Local AI |
|---|---|---|
| Where your question goes | A server overseas; stored, analysed, used for training | Your own hardware; never leaves the building |
| Internet required | Yes | No |
| Monthly cost | Subscription per user | $0 after one-time hardware purchase |
| Who learns from your data | The provider's next model | No one but you |

What It Can Do

  1. Answer questions. Ask it anything — law, health, history, governance, language. It draws from the knowledge it was trained on. No internet needed. Responses in 2–5 seconds.
  2. Draft documents. Court records, meeting minutes, correspondence, applications. Give it a rough idea and it writes a draft. You edit and approve.
  3. Translate and explain. Simplify legal language. Explain complex policy documents. Translate between languages. Make government paperwork readable.
  4. Process records. Summarise long documents. Extract key points from meeting recordings. Organise and search community archives.
  5. Learn your context. Fine-tune the model on your community's documents, protocols, and knowledge — so it understands your governance, not just generic Australian law.
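As a concrete sketch of how a phone or laptop on the local network might ask the box a question, here is a minimal Python client. It assumes the box runs an Ollama-style HTTP server; the hostname `ai-box.local`, the model name, and the port are illustrative placeholders, not part of any deployment described here.

```python
import json
import urllib.request

# Assumed endpoint: an Ollama-style server on the AI box (illustrative address).
AI_BOX_URL = "http://ai-box.local:11434/api/generate"

def build_request(prompt, model="mistral:7b"):
    """Build the JSON payload for a single question.

    stream=False asks the server for one complete answer
    instead of a token-by-token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_ai(prompt, url=AI_BOX_URL, timeout=30):
    """Send a question to the box over local WiFi and return the answer text."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_ai("Summarise the last community meeting in three points."))
```

Nothing in this exchange touches the internet: the request and response travel only over the building's own WiFi or mesh.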

What It Looks Like

┌──────────────────────────────────────┐
│ Community Office                     │
│                                      │
│ 📦 One box under a desk              │
│    └── 128GB RAM, GPU, 1TB storage   │
│    └── Runs 7 billion parameter model│
│    └── Answers questions in 2-5 sec  │
│    └── No internet required          │
│                                      │
│ 📱 📱 📱 Phones/laptops connect      │
│          via local WiFi or mesh      │
│                                      │
│ 🔒 Nothing leaves this building      │
└──────────────────────────────────────┘

It's a box. About the size of a thick book. It sits under a desk or on a shelf. People connect to it from their phones or laptops over local WiFi. They type a question, they get an answer. The question and the answer never leave the building.

The Numbers

2-5s: Response time with GPU acceleration. Feels like a conversation.
$0: Monthly cost. No subscription. Hardware is a one-time purchase.
7B+: Parameters. These models are genuinely useful for drafting, analysis, and Q&A.
0: Data sent to the cloud. Ever. Nothing leaves the room.

What It Costs

| Setup | Option A: Budget | Option B: Recommended |
|---|---|---|
| Computer | $800 (used workstation, 64GB RAM) | $1,500 (128GB RAM, RTX GPU) |
| GPU (for fast responses) | $350 (used RTX 3090) | Included above |
| Storage | $60 (1TB SSD) | Included above |
| AI models | $0 (open source) | $0 (open source) |
| Software | $0 (open source) | $0 (open source) |
| Monthly cost | $0 | $0 |
| Electricity | ~$5-10/month | ~$5-10/month |
| Total | ~$1,200 one-time | ~$1,500 one-time |
Compare: 10 people on ChatGPT Plus at $20 each = $200/month = $2,400/year. The $1,200 budget setup pays for itself in 6 months. After that, it's free forever.
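The payback claim is simple arithmetic; a short sketch of the calculation, using the prices from the table above:

```python
# Cost comparison: cloud subscriptions vs one-time local hardware.
USERS = 10
CLOUD_PER_USER_PER_MONTH = 20    # ChatGPT Plus, USD/month
LOCAL_ONE_TIME = 1200            # Option A (budget) total from the table

cloud_monthly = USERS * CLOUD_PER_USER_PER_MONTH   # $200/month
cloud_yearly = cloud_monthly * 12                  # $2,400/year

# Months until the one-time hardware cost equals cloud spend.
# (Electricity at ~$5-10/month is ignored here; including it
# stretches the payback only slightly.)
payback_months = LOCAL_ONE_TIME / cloud_monthly

print(f"Cloud: ${cloud_monthly}/month, ${cloud_yearly}/year")
print(f"Local AI pays for itself in {payback_months:.0f} months")
```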

How It Connects to the Mesh

Local AI runs on VexConnect. Community members access it through the mesh — no internet needed. The AI box is just another node on the community's own network.

📱 ─ BLE ─ 📱 ─ WiFi ─ 📦 AI Box
                        │
                        ├── "What does the Crimes Act say about..."
                        ├── "Draft a letter to the council about..."
                        └── "Summarise these meeting notes..."

All on your network. Nothing leaves.

What's Running Today

We have a working system. 128GB RAM, GPU-accelerated inference, multiple AI models loaded and running. This isn't theoretical — it's answering questions right now.

Hardware built and running. 128GB RAM, RTX 3070 GPU, 24 CPU cores.
Multiple models loaded. 7B to 24B parameters. Coding, reasoning, vision.
GPU-accelerated inference. 2-5 second responses for 7B models.
Web interface for easy access from any device on the network.

Community Fine-Tuning

The most powerful feature: you can teach the AI your community's knowledge.

Feed it your governance documents, court precedents, cultural protocols, meeting minutes. The model adapts to understand your context. When someone asks a question, it answers with your community's knowledge, not just generic information.

This knowledge stays on your hardware. It's never uploaded, never shared, never used to train someone else's model.
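One common way to prepare community documents for fine-tuning is an instruction-style JSONL file: one training example per line, each pairing a question with an answer grounded in the community's own records. This is a minimal sketch; the field names follow a common instruction-tuning convention, and the example records are invented placeholders, not real community data.

```python
import json

# Each training example pairs a question someone might ask
# with an answer drawn from the community's own documents.
# These records are illustrative placeholders.
examples = [
    {
        "instruction": "What is the process for raising a matter with community court?",
        "output": "According to the community governance protocol, matters are raised by...",
    },
    {
        "instruction": "Who approves meeting minutes?",
        "output": "Minutes are approved by the members present at the following meeting.",
    },
]

def write_jsonl(records, path):
    """Write one JSON object per line, the format most local
    fine-tuning tools (e.g. LoRA trainers) accept."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")

write_jsonl(examples, "community_finetune.jsonl")
```

The resulting file never needs to leave the box: fine-tuning runs on the same hardware that serves the answers.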