TL;DR
- Private, on-device search across your screenshots, memes, and photos.
- OCR + fuzzy text search today; hybrid semantic search next.
- Nothing ever leaves your device. Fast after initial indexing.
Why
Finding “that one screenshot” is annoying. Memeory lets you type what you remember and instantly jump to the right image, completely offline.
Highlights
- Fully local pipeline: scan → OCR → index → search. No servers, no tracking.
- Fuzzy text search over OCR output (handles typos and partial matches).
- Share or open the matched image directly from results.
How it works (v1)
- Permissioned gallery scan with content URI access.
- OCR via ML Kit, with an automatic searchable caption generated per image.
- Fuzzy search (Fuse.js) over all captured text.
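The app itself uses Fuse.js for this step; as a rough illustration of what "fuzzy search over OCR text" means, here is a toy edit-distance matcher. All names (`OcrEntry`, `fuzzySearch`, `maxDistance`) are made up for the sketch, not the app's actual API.

```typescript
// Toy fuzzy matcher illustrating the idea behind Fuse.js-style search:
// rank OCR captions by how close their best word is to the query.

type OcrEntry = { uri: string; text: string };

// Classic Levenshtein edit distance via dynamic programming.
function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [
    i,
    ...Array<number>(b.length).fill(0),
  ]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Score each entry by its closest word to the query (0 = exact match),
// drop anything too far away, and return best matches first.
function fuzzySearch(entries: OcrEntry[], query: string, maxDistance = 2): OcrEntry[] {
  const q = query.toLowerCase();
  return entries
    .map((e) => ({
      entry: e,
      score: Math.min(
        ...e.text.toLowerCase().split(/\s+/).map((w) => levenshtein(q, w)),
      ),
    }))
    .filter((r) => r.score <= maxDistance)
    .sort((a, b) => a.score - b.score)
    .map((r) => r.entry);
}
```

Fuse.js uses a more sophisticated scoring scheme (and handles partial matches within words), but the typo tolerance it provides is essentially this idea.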
Roadmap (v1.1+)
- Hybrid search: text + semantic embeddings with an ANN index.
- Model: a small image encoder (quantized SigLIP or similar).
- Currently prototyping on-device inference with ExecuTorch inside Expo.
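As a minimal sketch of what the hybrid ranking could look like: blend the existing text-match score with cosine similarity between a query embedding and each image's embedding. The weights, shapes, and `hybridScore` helper are assumptions for illustration; the actual encoder and ANN index are not wired up here.

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Linear blend of a normalized text score (0..1) and semantic similarity.
// alpha = 1 is pure text search; alpha = 0 is pure semantic search.
function hybridScore(
  textScore: number,
  queryEmb: number[],
  imageEmb: number[],
  alpha = 0.5,
): number {
  return alpha * textScore + (1 - alpha) * cosine(queryEmb, imageEmb);
}
```

In practice the semantic half would come from an ANN index over precomputed image embeddings rather than a brute-force scan, but the blended score is the same.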
Privacy
- Fully offline. No external calls; no analytics by default.
Tech
- React Native (Expo), TypeScript
- ML Kit Text Recognition
- ExecuTorch (for embeddings, v1.1, in development)
What I learned
- Android media pipeline quirks, content URIs, and batching I/O for speed.
- OCR normalization matters more than model size for real-world recall.
- Fuzzy search + OCR gets you 80% of the way; semantic adds the “vibes.”
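To make the normalization point concrete, here is the kind of cleanup that helps recall: lowercasing, stripping diacritics, dropping punctuation OCR tends to hallucinate, and collapsing whitespace. The specific heuristics and the `normalizeOcr` name are illustrative, not the app's exact pipeline.

```typescript
// Normalize raw OCR output before indexing so that fuzzy search
// matches what users actually type.
function normalizeOcr(raw: string): string {
  return raw
    .normalize("NFKD") // split accented chars into base char + combining mark
    .replace(/[\u0300-\u036f]/g, "") // drop the combining diacritics
    .toLowerCase()
    .replace(/[|_~^`]/g, " ") // punctuation OCR often invents at edges
    .replace(/\s+/g, " ") // collapse runs of whitespace and newlines
    .trim();
}
```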