The author built a personal AI assistant that runs entirely on the user's computer: no cloud, no API keys. It uses local models served by Ollama, understands uploaded documents, remembers past conversations, and speaks its replies aloud. Its personality can be customized through a simple UI, and everything runs inside a sleek Streamlit interface.

The tech stack: Python, LangChain, Ollama, FAISS, PyPDFLoader, TextLoader, SpeechRecognition, pyttsx3, and Streamlit.

Under the hood, the assistant saves conversations to disk as JSON, retrieves relevant context from uploaded documents with a FAISS vector index, supports voice input and output, and lets users adjust its tone via a system prompt. Along the way, the author learned how to build a fully offline assistant from scratch, integrate speech recognition and text-to-speech, and handle multi-turn memory with LangChain.

Planned features include multi-file upload support, document summarization, conversation export, and LAN deployment. A demo of the assistant is available, and the project is fully open-source: install Python, Ollama, and a model, then set it up from the code on GitHub.
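The conversation-persistence idea is straightforward: serialize the running message list to a JSON file and reload it on startup. The function names and file path below are illustrative assumptions, not the author's actual code, but the approach matches the described behavior:

```python
import json
from pathlib import Path

HISTORY_FILE = "chat_history.json"  # hypothetical path; the repo may use another

def load_history(path=HISTORY_FILE):
    """Reload prior turns so the assistant remembers across restarts."""
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else []

def save_history(messages, path=HISTORY_FILE):
    """Write the full message list back to disk as JSON."""
    Path(path).write_text(json.dumps(messages, indent=2))

history = load_history()
history.append({"role": "user", "content": "Hello!"})
save_history(history)
```

Rewriting the whole file on each turn is fine at chat-history scale and keeps the format trivially inspectable.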
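The FAISS-based retrieval boils down to: embed each document chunk, embed the query, and return the nearest chunks by vector similarity. The sketch below illustrates that core idea with plain NumPy and cosine similarity on toy vectors; FAISS does the same thing with optimized indexes over real embeddings, so treat this as a conceptual stand-in, not the project's code:

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k document chunks most similar to the query."""
    # Normalize so the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q
    return np.argsort(-sims)[:k]

# Toy 4-dimensional "embeddings" for three document chunks.
docs = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0, 0.0]])
query = np.array([1.0, 0.0, 0.0, 0.0])
print(top_k(query, docs))  # chunk 0 and the near-duplicate chunk 2 rank first
```

The retrieved chunks are then pasted into the prompt as context before the model answers, which is how the assistant grounds replies in the uploaded documents.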
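Tone customization via a system prompt usually means prepending one system message, built from the UI setting, to the conversation before every model call. A minimal sketch, assuming a chat-message format like Ollama's (the function name and prompt wording are hypothetical):

```python
def build_messages(personality, history, user_input):
    """Prepend a personality-bearing system prompt to the running conversation."""
    system = {
        "role": "system",
        "content": f"You are a helpful local assistant. Tone: {personality}.",
    }
    return [system] + history + [{"role": "user", "content": user_input}]

msgs = build_messages("friendly and concise", [], "Summarize my PDF")
```

Because the system message is rebuilt each turn, changing the personality in the UI takes effect immediately without touching the stored history.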
dev.to
