Run ChatGPT-like AI on your own computer! LocalLab is a complete AI platform that runs models locally with a powerful chat interface and Python client.
LocalLab gives you your own personal ChatGPT that runs entirely on your computer:
- 🎯 Terminal Chat Interface - ChatGPT-like experience in your terminal
- 🔒 Complete Privacy - Your data never leaves your computer
- 💰 Zero Cost - No monthly fees or API charges
- 🌐 Access Anywhere - Use from any device with ngrok tunneling
- ⚡ Multiple Models - Support for various open-source AI models
- 🤖 Model Management - Download, organize, and manage AI models locally
- 🎮 Free GPU - Run on Google Colab for free GPU acceleration
Perfect for developers, students, researchers, or anyone who wants to experiment with AI without privacy concerns or ongoing costs.
```bash
# 1. Install LocalLab
pip install locallab locallab-client

# 2. Start your AI server
locallab start

# 3. Chat with your AI
locallab chat
```
That's it! You now have your own ChatGPT running locally.
Want to download models ahead of time or manage your local AI models? LocalLab includes powerful model management:
```bash
# Discover available models
locallab models discover

# Download a model locally (faster startup)
locallab models download microsoft/phi-2

# List your cached models
locallab models list

# Get detailed model information
locallab models info microsoft/phi-2
```
📖 Learn More: See the Model Management Guide for complete documentation.
LocalLab has four main components:

**1. LocalLab Server**
- Runs AI models on your computer
- Provides a web API for interactions
- Handles model loading and optimization
- Start with: `locallab start`

**2. Chat Interface**
- Terminal-based ChatGPT-like experience
- Real-time streaming responses
- Multiple generation modes
- Access with: `locallab chat`

**3. Model Management**
- Download and organize AI models locally
- Discover available models from HuggingFace Hub
- Manage disk space and cache cleanup
- Use with: `locallab models`

**4. Python Client**
- Programmatic access for your code
- Both sync and async support
- Use with: `client = SyncLocalLabClient("http://localhost:8000")`
```mermaid
graph TD
    A[Terminal Chat] -->|Uses| C[LocalLab Server]
    B[Python Code] -->|Uses| C
    C -->|Runs| D[AI Models]
    C -->|Optional| E[Ngrok Tunnel]
    E -->|Access from| F[Any Device]

    style C fill:#2563eb,stroke:#1e40af,stroke-width:2px,color:#ffffff
    style D fill:#059669,stroke:#047857,stroke-width:2px,color:#ffffff
    style A fill:#7c3aed,stroke:#6d28d9,stroke-width:2px,color:#ffffff
    style B fill:#dc2626,stroke:#b91c1c,stroke-width:2px,color:#ffffff
    style E fill:#ea580c,stroke:#c2410c,stroke-width:2px,color:#ffffff
    style F fill:#0891b2,stroke:#0e7490,stroke-width:2px,color:#ffffff
```
🌟 The Magic: Use `--use-ngrok` to access your AI from anywhere - your phone, another computer, or share with friends!
📦 Easy Setup 🔒 Privacy First 🎮 Free GPU Access
🤖 Multiple Models 💾 Memory Efficient 🔄 Auto-Optimization
🗂️ Model Management ⚡ Fast Response 🔧 Simple Server
🌐 Local or Colab 🔌 Client Package 🛡️ Secure Tunneling
🌍 Access Anywhere 📥 Offline Models 🧹 Cache Cleanup
Two-Part System:
- LocalLab Server: Runs the AI models and exposes API endpoints
- LocalLab Client: A separate Python package (`pip install locallab-client`) that connects to the server
Access From Anywhere: With built-in ngrok integration, you can securely access your LocalLab server from any device, anywhere in the world - perfect for teams, remote work, or accessing your models on the go.
**On Your Computer (Local Mode)**

```
💻 Your Computer
└── 🚀 LocalLab Server
    └── 🤖 AI Model
        └── 🔧 Auto-optimization
```

**On Google Colab (Free GPU Mode)**

```
☁️ Google Colab
└── 🎮 Free GPU
    └── 🚀 LocalLab Server
        └── 🤖 AI Model
            └── ⚡ GPU Acceleration
```
**Install Required Build Tools**
- Install Microsoft C++ Build Tools
  - Select "Desktop development with C++"
- Install CMake
  - Add to PATH during installation

**Install Packages**

```bash
pip install locallab locallab-client
```

**Verify PATH**

If the `locallab` command isn't found, add Python Scripts to PATH:

```bash
# Find Python location
where python
# This will show something like:
# C:\Users\YourName\AppData\Local\Programs\Python\Python311\python.exe
```
Adding to PATH in Windows:
1. Press `Win + X` and select "System"
2. Click "Advanced system settings" on the right
3. Click "Environment Variables" button
4. Under "System variables", find and select "Path", then click "Edit"
5. Click "New" and add your Python Scripts path (e.g., `C:\Users\YourName\AppData\Local\Programs\Python\Python311\Scripts\`)
6. Click "OK" on all dialogs
7. Restart your command prompt
8. Alternatively, use: `python -m locallab start`
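Not sure which Scripts folder to add? Python itself can report where pip places console scripts such as `locallab`. A small stdlib-only check (works on any platform):

```python
import sysconfig

# Ask Python for the directory where console scripts are installed.
# On Windows this is the Scripts\ folder that must be on PATH.
scripts_dir = sysconfig.get_path("scripts")
print(scripts_dir)
```

Copy the printed path into step 5 above.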
🔍 Having issues? See our Windows Troubleshooting Guide
```bash
# Install both server and client packages
pip install locallab locallab-client

# Run interactive configuration
locallab config
# This will help you set up:
# - Model selection
# - Memory optimizations
# - GPU settings
# - System resources

# Start with saved configuration
locallab start

# Or start with specific options
locallab start --model microsoft/phi-2 --quantize --quantize-type int8
```
The LocalLab Chat Interface is a powerful terminal-based tool that gives you a ChatGPT-like experience right in your command line. It's the easiest way to interact with your AI models.
- Instant AI Access - No coding required, just type and chat
- Real-time Responses - See AI responses as they're generated
- Rich Formatting - Markdown rendering with syntax highlighting
- Smart Features - History, saving, batch processing, and more
- Works Everywhere - Local, remote, or Google Colab
```bash
# Start your server
locallab start

# Open chat interface
locallab chat
```
| Feature | Description | Example |
|---|---|---|
| Dynamic Mode Switching | Change generation mode per message | `Explain AI --stream` |
| Real-time Streaming | See responses as they're typed | Live text generation |
| Conversation History | Track and save your chats | `/history`, `/save` |
| Batch Processing | Process multiple prompts | `/batch` command |
| Remote Access | Connect to any LocalLab server | `--url https://your-server.com` |
| Error Recovery | Auto-reconnection and graceful handling | Seamless experience |
```
/help     # Show all available commands
/history  # View conversation history
/save     # Save current conversation
/batch    # Enter batch processing mode
/reset    # Clear conversation history
/exit     # Exit gracefully
```
Override the default generation mode for any message:
```
You: Write a story --stream            # Use streaming mode
🔄 Using stream mode for this message

You: Remember my name is Alice --chat  # Use chat mode with context
🔄 Using chat mode for this message

You: What's 2+2? --simple              # Use simple mode
🔄 Using simple mode for this message
```
```
$ locallab chat
🚀 LocalLab Chat Interface
✅ Connected to: http://localhost:8000
📊 Server: LocalLab v0.9.0 | Model: qwen-0.5b

You: Hello! Can you help me with Python?
AI: Hello! I'd be happy to help you with Python programming.
    What specific topic would you like to explore?

You: Show me how to create a class --stream
AI: Here's how to create a simple class in Python:

    class Person:
        def __init__(self, name, age):
            self.name = name
            self.age = age

        def introduce(self):
            return f"Hi, I'm {self.name} and I'm {self.age} years old."

    # Usage
    person = Person("Alice", 25)
    print(person.introduce())

You: /save
💾 Conversation saved to: chat_2024-07-06_14-30-15.json

You: /exit
👋 Goodbye!
```
Connect to any LocalLab server from anywhere:
```bash
# Connect to remote server
locallab chat --url https://abc123.ngrok.app

# Use with Google Colab
locallab chat --url https://your-colab-ngrok-url.app
```
📖 Complete Guide: See the Chat Interface Documentation for advanced features, examples, and troubleshooting.
For developers who want to integrate AI into their applications, LocalLab provides a powerful Python client package.
| Method | Best For | Getting Started |
|---|---|---|
| Chat Interface | Interactive use, testing, quick questions | `locallab chat` |
| Python Client | Applications, scripts, automation | `from locallab_client import SyncLocalLabClient` |
```python
from locallab_client import SyncLocalLabClient

# Connect to server - choose ONE of these options:
# 1. For local server (default)
client = SyncLocalLabClient("http://localhost:8000")

# 2. For remote server via ngrok (when using Google Colab or --use-ngrok)
# client = SyncLocalLabClient("https://abc123.ngrok.app")  # Replace with your ngrok URL

try:
    print("Generating text...")
    # Generate text
    response = client.generate("Write a story")
    print(response)

    print("Streaming responses...")
    # Stream responses
    for token in client.stream_generate("Tell me a story"):
        print(token, end="", flush=True)

    print("\nChatting with AI...")
    # Chat with AI
    response = client.chat([
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Hello!"}
    ])
    print(response.choices[0]["message"]["content"])
finally:
    # Always close the client
    client.close()
```
💡 Important: When connecting to a server running on Google Colab or with ngrok enabled, always use the ngrok URL (https://abc123.ngrok.app) that was displayed when you started the server.
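One way to avoid editing code every time you switch between a local server and an ngrok URL is to read the URL from an environment variable. This is a convention of this sketch, not a LocalLab feature; `LOCALLAB_URL` is a variable name chosen here for illustration:

```python
import os

# Fall back to the local server default when LOCALLAB_URL is unset.
base_url = os.environ.get("LOCALLAB_URL", "http://localhost:8000")
print(base_url)

# Then connect with whichever URL was resolved:
# client = SyncLocalLabClient(base_url)
```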
```python
import asyncio
from locallab_client import LocalLabClient

async def main():
    # Connect to server - choose ONE of these options:
    # 1. For local server (default)
    client = LocalLabClient("http://localhost:8000")

    # 2. For remote server via ngrok (when using Google Colab or --use-ngrok)
    # client = LocalLabClient("https://abc123.ngrok.app")  # Replace with your ngrok URL

    try:
        print("Generating text...")
        # Generate text
        response = await client.generate("Write a story")
        print(response)

        print("Streaming responses...")
        # Stream responses
        async for token in client.stream_generate("Tell me a story"):
            print(token, end="", flush=True)

        print("\nChatting with AI...")
        # Chat with AI
        response = await client.chat([
            {"role": "system", "content": "You are helpful."},
            {"role": "user", "content": "Hello!"}
        ])
        # Extracting content
        content = response['choices'][0]['message']['content']
        print(content)
    finally:
        # Always close the client
        await client.close()

# Run the async function
asyncio.run(main())
```
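The `async for` streaming pattern above works with any async generator of tokens. A self-contained sketch that collects a stream into the full response text, using a stand-in generator rather than the real client so it runs without a server:

```python
import asyncio

# Stand-in for client.stream_generate(): any async generator of string
# tokens behaves the same way in the `async for` loop shown above.
async def fake_stream():
    for token in ["Once", " upon", " a", " time"]:
        yield token

async def collect(stream):
    # Accumulate streamed tokens into the full response text.
    parts = []
    async for token in stream:
        parts.append(token)
    return "".join(parts)

result = asyncio.run(collect(fake_stream()))
print(result)  # Once upon a time
```

Swap `fake_stream()` for `client.stream_generate("...")` inside an async function to collect a real response the same way.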
First, you'll set up the LocalLab server on Google Colab to use their free GPU:
```python
# In your Colab notebook:

# 1. Install the server package
!pip install locallab

# 2. Configure with CLI (notice the ! prefix)
!locallab config

# 3. Start server with ngrok for remote access
!locallab start --use-ngrok

# The server will display a public URL like:
# 🚀 Ngrok Public URL: https://abc123.ngrok.app
# COPY THIS URL - you'll need it to connect!
```
After setting up your server on Google Colab, connect to it using the LocalLab client package. The server will display an ngrok URL that you'll use for the connection.
You can now use the client connection examples from the Client Connection & Usage section above.
Just make sure to:
- Use your ngrok URL instead of localhost
- Install the client package if needed
For example:
```python
# In another cell in the same Colab notebook:

# 1. Install the client package
!pip install locallab-client

# 2. Import the client
from locallab_client import SyncLocalLabClient

# 3. Connect to your ngrok URL (replace with your actual URL from Step 1)
client = SyncLocalLabClient("https://abc123.ngrok.app")  # ← REPLACE THIS with your URL!

# 4. Now you can use any of the client methods
response = client.generate("Write a poem about AI")
print(response)

# 5. Always close when done
client.close()
```
The power of using ngrok is that you can connect to your Colab server from anywhere:
```bash
# On your local computer, phone, or any device with Python:
pip install locallab-client
```

```python
from locallab_client import SyncLocalLabClient

client = SyncLocalLabClient("https://abc123.ngrok.app")  # ← REPLACE THIS with your URL!
response = client.generate("Hello from my device!")
print(response)
client.close()
```
💡 Remote Access Tip: The ngrok URL lets you access your LocalLab server from any device - your phone, tablet, another computer, or share with teammates. See the Client Connection & Usage section above for more examples of what you can do with the client.
**Local:**
- Python 3.8+
- 4GB RAM minimum (8GB+ recommended)
- GPU optional but recommended
- Internet connection for downloading models

**Google Colab:**
- Just a Google account!
- Free tier works fine
- Easy Setup: Just pip install and run
- Multiple Models: Use any Hugging Face model
- Resource Efficient: Automatic optimization
- Privacy First: All local, no data sent to cloud
- Free GPU: Google Colab integration
- Flexible Client API: Both async and sync clients available
- Automatic Resource Management: Sessions close automatically
- Remote Access: Access your models from anywhere with ngrok integration
- Secure Tunneling: Share your models securely with teammates or access from mobile devices
- Client Libraries: Python libraries for both synchronous and asynchronous usage
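Because the clients expose a `close()` method (see the examples above), the sync client also pairs naturally with `contextlib.closing` from the Python standard library, which guarantees `close()` runs even if an error occurs. A minimal sketch using a stand-in class (not the real client) so the snippet is self-contained:

```python
from contextlib import closing

# Stand-in with the same generate()/close() surface as SyncLocalLabClient.
class StubClient:
    def __init__(self):
        self.closed = False

    def generate(self, prompt):
        return f"echo: {prompt}"

    def close(self):
        self.closed = True

client = StubClient()
with closing(client) as c:
    reply = c.generate("Hello")

print(client.closed)  # close() was called automatically on exit
```

This replaces the explicit `try`/`finally` pattern when you only need guaranteed cleanup.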
```mermaid
graph LR
    A[Your Application] -->|Uses| B[LocalLab Client]
    B -->|API Requests| C[LocalLab Server]
    C -->|Runs| D[AI Models]
    C -->|Optional| E[Ngrok Tunnel]
    E -->|Remote Access| F[Any Device, Anywhere]

    style A fill:#7c3aed,stroke:#6d28d9,stroke-width:2px,color:#ffffff
    style B fill:#dc2626,stroke:#b91c1c,stroke-width:2px,color:#ffffff
    style C fill:#2563eb,stroke:#1e40af,stroke-width:2px,color:#ffffff
    style D fill:#059669,stroke:#047857,stroke-width:2px,color:#ffffff
    style E fill:#ea580c,stroke:#c2410c,stroke-width:2px,color:#ffffff
    style F fill:#0891b2,stroke:#0e7490,stroke-width:2px,color:#ffffff
```
| Guide | Description |
|---|---|
| Installation & Setup | Complete installation guide for all platforms |
| CLI Overview | Command-line interface documentation |
| Chat Interface | Terminal chat features and examples |

| Guide | Description |
|---|---|
| CLI Reference | Complete command documentation |
| Model Management | Download and organize AI models |
| Python Client | Programmatic access guide |
| API Reference | HTTP API documentation |

| Guide | Description |
|---|---|
| Google Colab Setup | Free GPU deployment guide |
| Troubleshooting | Common issues and solutions |
| Advanced Features | Power user features |
- Check FAQ
- Visit Troubleshooting
- Ask in Discussions
If you find LocalLab helpful, please star our repository! It helps others discover the project.