Chat with your 10-years-younger self.

Import your WhatsApp, Facebook, Instagram, Gmail, iMessage, Telegram, Twitter, or Discord history. A local AI learns your voice and lets you text past-you. No cloud. No API. Everything runs on your machine.

100% local · No cloud · No telemetry · AGPLv3

How It Works

1

Import

Export your chat history from any platform. Drop the files into Pratibmb. Messages are parsed and stored in a local SQLite database on your machine.
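As a rough illustration of the import step, here is a minimal sketch of parsing a WhatsApp .txt export into SQLite. The regex assumes the common "DD/MM/YYYY, HH:MM - Sender: text" layout (WhatsApp's format varies by locale), and the table schema is hypothetical, not Pratibmb's actual one:

```python
import re
import sqlite3

# WhatsApp's .txt export format varies by locale; this regex assumes the
# common "DD/MM/YYYY, HH:MM - Sender: text" layout.
LINE_RE = re.compile(r"^(\d{1,2}/\d{1,2}/\d{2,4}), (\d{1,2}:\d{2}) - ([^:]+): (.*)$")

def parse_whatsapp_txt(text):
    """Yield (date, time, sender, message) tuples from an export."""
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m:
            yield m.groups()

def import_messages(db_path, text):
    """Parse an export and store it in a local SQLite database."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS messages "
        "(date TEXT, time TEXT, sender TEXT, body TEXT)"
    )
    con.executemany(
        "INSERT INTO messages VALUES (?, ?, ?, ?)",
        parse_whatsapp_txt(text),
    )
    con.commit()
    return con

sample = "12/03/2015, 21:04 - Alex: see you tomorrow\n12/03/2015, 21:05 - Me: ok!"
con = import_messages(":memory:", sample)
rows = con.execute("SELECT sender, body FROM messages").fetchall()
```

Everything stays in one local SQLite file; nothing touches the network.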

2

Embed

A local embedding model (84MB) creates semantic representations of your messages. This happens on your CPU/GPU. No text leaves your device.

3

Profile

A local LLM (2.3GB) analyzes your conversations to extract relationships, life events, interests, and communication style. All on-device.
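To give a feel for the profiling step, here is a hedged sketch of prompting a local LLM for structured profile fields and parsing its reply. The prompt wording and JSON keys are illustrative assumptions, not Pratibmb's actual schema, and the model call is simulated:

```python
import json

# Hypothetical extraction prompt; the fields and wording are illustrative,
# not Pratibmb's actual schema.
PROFILE_PROMPT = """Analyze these messages and reply with JSON only, using the
keys "relationships", "life_events", "interests", "style".

Messages:
{messages}
"""

def build_prompt(messages):
    return PROFILE_PROMPT.format(messages="\n".join(messages))

def parse_profile(reply):
    """Parse the model's JSON reply, tolerating surrounding prose."""
    start, end = reply.find("{"), reply.rfind("}")
    return json.loads(reply[start : end + 1])

# A local LLM call would go here (e.g. via a llama.cpp binding); we simulate
# its reply to show the round trip.
reply = '{"relationships": ["Alex"], "life_events": [], "interests": ["guitar"], "style": "casual"}'
profile = parse_profile(reply)
```

Because the model runs on-device, the extracted profile is just another row in the local database.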

4

Chat

Pick a year and start texting. Past-you responds in your voice, grounded in your real memories. Optional LoRA fine-tuning makes it even more authentic.
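The chat step above is retrieval-augmented generation in miniature: embed the incoming question, find your most similar past messages, and ground the reply in them. A minimal sketch with toy 3-d vectors standing in for real embedding-model output:

```python
import numpy as np

def top_k(query_vec, message_vecs, k=2):
    """Return indices of the k most similar messages by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    m = message_vecs / np.linalg.norm(message_vecs, axis=1, keepdims=True)
    scores = m @ q
    return np.argsort(scores)[::-1][:k]

messages = ["started guitar lessons", "exam week is brutal", "moved to Pune"]
# Toy 3-d vectors standing in for real embedding-model output.
vecs = np.array([[0.9, 0.1, 0.0], [0.0, 1.0, 0.1], [0.1, 0.0, 1.0]])
query = np.array([1.0, 0.0, 0.1])   # e.g. "do you still play guitar?"

idx = top_k(query, vecs, k=2)
context = [messages[i] for i in idx]
prompt = f"You are the user in 2015. Memories: {context}. Reply in their voice."
```

The retrieved memories are stitched into the prompt, which is why past-you can cite things you actually said.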

8 Platforms Supported

Export your history from any of these. Pratibmb auto-detects the format.

💬

WhatsApp

Export Chat as .txt

👥

Facebook

Download Your Information (JSON)

📷

Instagram

Download Your Information (JSON)

📧

Gmail

Google Takeout (MBOX)

🗨

iMessage

macOS chat.db (SQLite)

✈️

Telegram

Desktop Export (JSON)

🐦

Twitter / X

Download Archive (.js)

🎮

Discord

DiscordChatExporter (JSON)
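Auto-detection can be as simple as checking the file extension and the shape of the content. This sketch shows one plausible set of heuristics; they are illustrative assumptions, not Pratibmb's actual detector:

```python
import json
import tempfile
from pathlib import Path

def detect_format(path):
    """Guess the source platform from file extension and content shape.
    Heuristics are illustrative, not Pratibmb's actual detector."""
    p = Path(path)
    if p.suffix == ".txt":
        return "whatsapp"
    if p.suffix == ".mbox":
        return "gmail"
    if p.suffix == ".db":
        return "imessage"
    if p.suffix == ".js":
        return "twitter"
    if p.suffix == ".json":
        data = json.loads(p.read_text(encoding="utf-8"))
        if isinstance(data, dict) and "messages" in data:
            # Telegram Desktop and DiscordChatExporter both use a top-level
            # "messages" key; tell them apart by the sibling keys.
            return "telegram" if "name" in data else "discord"
        return "facebook"
    raise ValueError(f"unknown export format: {path}")

tmp = Path(tempfile.mkdtemp())
(tmp / "chat.txt").write_text("12/03/2015, 21:04 - Alex: hi")
(tmp / "result.json").write_text('{"name": "Alex", "messages": []}')
fmt1 = detect_format(tmp / "chat.txt")
fmt2 = detect_format(tmp / "result.json")
```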

Download Pratibmb

Free. Open source. Runs offline after first model download (~2.5GB one-time).

macOS (Apple Silicon)

M1/M2/M3/M4 Macs. Native Metal acceleration for fine-tuning via MLX.

.dmg (arm64)

macOS (Intel)

2015-2020 Intel Macs. CPU-based inference.

.dmg (x64)

🐧

Linux

Ubuntu/Debian (.deb) or universal (.AppImage). CUDA GPU or CPU.

.deb (amd64) · .AppImage

🪟

Windows

Windows 10/11 (64-bit). NVIDIA CUDA acceleration for fine-tuning.

.exe installer · .msi

Requires Python 3.10+ and ~4GB RAM. Models are downloaded on first launch. Setup guide

Fine-Tune on Your Voice

Optional LoRA training teaches the model your texting style. Here's what each platform uses:

| Platform | Training Backend | GPU Required? | Time (~1500 pairs) | Install |
|---|---|---|---|---|
| macOS (Apple Silicon) | MLX-LM | No (uses Metal) | ~20 min | pip install mlx-lm |
| Windows (NVIDIA GPU) | PyTorch + PEFT + QLoRA | Yes (6GB+ VRAM) | ~30 min | pip install "pratibmb[finetune-pytorch]" |
| Linux (NVIDIA GPU) | PyTorch + PEFT + QLoRA | Yes (6GB+ VRAM) | ~30 min | pip install "pratibmb[finetune-pytorch]" |
| Linux (CPU-only) | PyTorch (no quantization) | No | ~2 hours | pip install "pratibmb[finetune-pytorch]" |

Fine-tuning is optional. The base model works well out of the box with profile extraction and RAG. Fine-tuning adds your specific texting style (abbreviations, language mix, emoji patterns).
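Before any backend runs, the chat history has to become training pairs. One common shape for LoRA fine-tuning is JSONL prompt/completion pairs, built by pairing each incoming message with your next reply; the field names below are assumptions, not Pratibmb's actual format:

```python
import json

def to_training_pairs(messages, me="Me"):
    """Pair each incoming message with your next reply.
    `messages` is a list of (sender, text) in chronological order."""
    pairs = []
    for (s1, t1), (s2, t2) in zip(messages, messages[1:]):
        if s1 != me and s2 == me:
            pairs.append({"prompt": t1, "completion": t2})
    return pairs

history = [
    ("Alex", "coming to practice?"),
    ("Me", "ya 5 min, bringing the amp"),
    ("Me", "traffic tho"),
    ("Alex", "lol ok"),
]
pairs = to_training_pairs(history)
jsonl = "\n".join(json.dumps(p) for p in pairs)
```

Because the completions are your verbatim replies, the adapter picks up your abbreviations, language mix, and emoji patterns directly from the data.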

Privacy by Architecture

Not just a policy. The code enforces it.

🔒

100% Local

Every computation runs on your hardware. Messages, embeddings, models, and profiles never leave your machine.

No Cloud

No OpenAI API, no cloud inference, no remote servers. Works with Wi-Fi off after the initial model download.

📈

No Telemetry

Zero analytics, no crash reports, no usage tracking, no phone-home. Check the source code yourself.

🛠

Open Source

AGPLv3 licensed. Every line is auditable. Don't trust us — verify. Community contributions welcome.

Built in the Open

Pratibmb is free software under the AGPLv3 license. The source code is on GitHub.

Your most intimate data — a decade of private messages — is the one dataset you should never upload to somebody else's API. That's why Pratibmb exists.