A sleek local notepad app powered by Ollama. Write notes, chat with open-source language models, and tweak generation settings. Built with a minimal Tkinter-based GUI. The goal of the project is to offer an accessible, lightweight take on open-webui's capabilities, delivered as a notepad app.
Tiny Notepad is a Python desktop app that allows you to:
- Write and save timestamped notes
- Load previous notes from a sidebar
- Interact with any locally installed Ollama model
- Adjust LLM generation parameters like temperature, top_p, top_k, etc.
Ideal for journaling, note-taking, or exploring how LLMs respond to prompts — all running entirely on your machine.
- Clean Tkinter GUI
- Local-first: works offline with Ollama
- Sidebar to browse saved notes
- Real-time interaction with models (via the Ollama API)
- Editable generation settings: `temperature`, `top_p`, `top_k`, `repeat_penalty`, `presence_penalty`, `frequency_penalty`, stop sequences
- Install Python (>=3.7)
- Install Tiny Notepad:

  ```
  pip install tiny-notepad
  ```

- Install `ollama` (required). Download Ollama and follow the platform-specific installation instructions.
- Run a model via Ollama. Before using the app, make sure a model is pulled and running:

  ```
  ollama pull llama3
  ollama run llama3
  ```

  OR

  ```
  ollama serve
  ```

Once installed via pip, launch the app using:

```
tiny-notepad
```

The app will auto-check for Ollama and attempt to start `ollama serve` if it's not already running.
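The auto-check described above can be sketched as follows. This is a minimal illustration, not the app's actual code: the function names (`is_ollama_running`, `ensure_ollama`) are hypothetical, and it assumes Ollama's default local endpoint at port 11434.

```python
import subprocess
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint


def is_ollama_running(url=OLLAMA_URL, timeout=2):
    """Return True if an Ollama server responds at the given URL."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except (urllib.error.URLError, OSError):
        return False


def ensure_ollama(url=OLLAMA_URL):
    """Start `ollama serve` in the background if no server is reachable."""
    if not is_ollama_running(url):
        subprocess.Popen(
            ["ollama", "serve"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
```

In practice a GUI app would retry the check a few times after spawning the server, since `ollama serve` takes a moment to come up.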
All notes are saved in a `notes/` directory, auto-named like:

```
notes_2025-05-18_14-33-21.txt
```
You can browse and reopen them from the built-in sidebar.
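The naming scheme above can be reproduced with a short helper. This is a sketch of the described behavior, not the app's actual implementation; `new_note_path` is a hypothetical name.

```python
from datetime import datetime
from pathlib import Path


def new_note_path(notes_dir="notes"):
    """Build a timestamped note path like notes/notes_2025-05-18_14-33-21.txt."""
    Path(notes_dir).mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    return Path(notes_dir) / f"notes_{stamp}.txt"
```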
| Parameter | Description | Default |
|---|---|---|
| `temperature` | Controls randomness (0 = deterministic, 2 = wild) | 0.7 |
| `top_p` | Nucleus sampling (probability threshold) | 0.9 |
| `top_k` | Limits sampling to the top-k tokens | 40 |
| `repeat_penalty` | Penalizes repeated tokens | 1.0 |
| `presence_penalty` | Encourages new topic exploration | 0.0 |
| `frequency_penalty` | Penalizes frequent tokens | 0.0 |
| `stop` | Comma-separated stop sequences that end generation | (custom) |
All parameters can be adjusted in the GUI in real time.
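These settings map onto the `options` object in Ollama's `/api/generate` request body, with `stop` passed as a list of sequences. Below is a minimal sketch of assembling such a request; `build_generate_payload` is a hypothetical helper, not part of the app.

```python
import json


def build_generate_payload(model, prompt, **options):
    """Assemble a JSON body for Ollama's /api/generate endpoint.

    Generation settings (temperature, top_p, stop, ...) go under
    the "options" key of the request body.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    if options:
        payload["options"] = options
    return json.dumps(payload)
```

A GUI can rebuild this payload from the current widget values on every send, which is all "real time" adjustment requires.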
Pull and try other models using Ollama:

```
ollama pull mistral
ollama pull gemma
```

They will appear in the model dropdown after restarting the app.
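A model dropdown like this can be populated from Ollama's `/api/tags` endpoint, which returns the locally installed models. The sketch below assumes that endpoint and the default port; `model_names` and `list_models` are hypothetical helpers, not the app's actual code.

```python
import json
import urllib.request


def model_names(tags_json):
    """Extract model names from a decoded /api/tags response."""
    return [m["name"] for m in tags_json.get("models", [])]


def list_models(url="http://localhost:11434/api/tags", timeout=2):
    """Fetch the names of locally installed Ollama models."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return model_names(json.load(resp))
```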
MIT License © 2025 Ronald Wilson