A Neovim file picker that learns how you navigate. A neural network trains on your file selections to rank results by what you're most likely to open next, combining fuzzy matching with contextual signals like buffer state, directory proximity, frecency, and file-to-file transition patterns.
Inspired by smart-open.nvim, built for Snacks.nvim.
Enable `debug.preview` to watch the model's score breakdowns and training in real time.

> [!NOTE]
> Neural Open ships with pre-trained default weights, so it's useful immediately, but that network was trained on one person's workflow. Give it a couple of days of normal use and it will start adapting to your specific navigation patterns.
Here's the debug preview showing the neural network's score breakdown and training metrics as you navigate:
Using lazy.nvim:
```lua
{
  "dtormoen/neural-open.nvim",
  dependencies = {
    "folke/snacks.nvim",
  },
  -- NeuralOpen implements lazy loading internally, but it must be loaded
  -- at startup for recency tracking to work.
  lazy = false,
  keys = {
    { "<leader><leader>", "<Plug>(NeuralOpen)", desc = "Neural Open Files" },
  },
  -- opts are optional; NeuralOpen falls back to the defaults below.
  opts = {},
}
```
```lua
require("neural-open").setup({
  -- Scoring algorithm: "nn" (neural network), "classic" (weighted features), or "naive"
  algorithm = "nn",

  -- Algorithm-specific configuration
  algorithm_config = {
    -- Neural network algorithm settings (default)
    nn = {
      architecture = { 11, 16, 16, 8, 1 }, -- Input → Hidden1 → Hidden2 → Hidden3 → Output
      optimizer = "adamw", -- "sgd" or "adamw"
      learning_rate = 0.001, -- Learning rate for gradient descent
      batch_size = 128, -- Number of samples per training batch
      history_size = 2000, -- Maximum stored historical selections
      batches_per_update = 5, -- Number of batches per weight update
      weight_decay = 0.0001, -- L2 regularization to prevent overfitting
      layer_decay_multipliers = nil, -- Optional per-layer decay rates
      dropout_rates = { 0, 0.25, 0.25 }, -- Dropout rates for hidden layers
      warmup_steps = 10, -- Learning rate warmup steps (recommended for AdamW)
      warmup_start_factor = 0.1, -- Start at 10% of the learning rate
      adam_beta1 = 0.9, -- AdamW first moment decay
      adam_beta2 = 0.999, -- AdamW second moment decay
      adam_epsilon = 1e-8, -- AdamW numerical stability
      match_dropout = 0.25, -- Dropout rate for match/virtual_name during training
      margin = 1.0, -- Margin for pairwise hinge loss
    },

    -- Classic algorithm settings (weighted feature scoring)
    classic = {
      learning_rate = 0.6, -- Learning rate for weight adjustments (0.0 to 1.0)
      default_weights = {
        match = 140, -- Snacks fuzzy matching
        virtual_name = 131, -- Virtual name matching
        open = 3, -- Open buffer bonus
        alt = 4, -- Alternate buffer bonus
        proximity = 13, -- Directory proximity
        project = 10, -- Project (cwd) bonus
        frecency = 17, -- Frecency score
        recency = 9, -- Recency score
        trigram = 10, -- Trigram similarity
        transition = 5, -- File transition tracking
        not_current = 5, -- Not-current-file indicator
      },
    },

    naive = {
      -- No configuration needed
    },
  },

  -- Path to the JSON file storing learned weights
  weights_path = vim.fn.stdpath("data") .. "/neural-open/weights.json",

  -- Maximum number of files in the persistent recency list
  recency_list_size = 100,

  -- Debug settings
  debug = {
    preview = false, -- Show a detailed score breakdown in the preview
    snacks_scores = false, -- Show Snacks.nvim debug scores in the picker
    latency = false, -- Enable detailed latency tracking
    latency_file = nil, -- Optional file path for persistent latency logging
    latency_threshold_ms = 100, -- Only log operations exceeding this duration
    latency_auto_clipboard = false, -- Copy the timing report to the clipboard
  },

  -- Special files that include the parent directory in their virtual name
  special_files = {
    ["__init__.py"] = true,
    ["index.js"] = true,
    ["index.jsx"] = true,
    ["index.ts"] = true,
    ["index.tsx"] = true,
    ["init.lua"] = true,
    ["init.vim"] = true,
    ["mod.rs"] = true,
  },
})
```
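Since opts are optional and the defaults above are used automatically, a small override is usually all you need. Assuming the conventional deep-merge of user opts with defaults, a minimal sketch using only the options documented above:

```lua
-- Minimal override: switch to the classic algorithm and turn on the
-- score-breakdown preview. All other options keep their defaults.
require("neural-open").setup({
  algorithm = "classic",
  debug = { preview = true },
})
```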
```vim
" Via command
:NeuralOpen
```

```lua
-- Via Lua
require("neural-open").open()

-- With custom options
require("neural-open").open({
  cwd = "/path/to/project",
  prompt = "Neural Open> ",
})

-- Using Snacks.nvim directly
require("snacks").picker.pick("neural_open")
```
Change the scoring algorithm at runtime:
```vim
" Show current algorithm
:NeuralOpen algorithm

" Switch to the neural network algorithm (default)
:NeuralOpen algorithm nn

" Switch to the classic weighted algorithm
:NeuralOpen algorithm classic

" Switch to the naive algorithm
:NeuralOpen algorithm naive
```
If the learned weights aren't producing good results, reset them:
```lua
require("neural-open").reset_weights()
```
Or via command:
```vim
" Reset the current algorithm's weights
:NeuralOpen reset

" Reset a specific algorithm's weights
:NeuralOpen reset nn
:NeuralOpen reset classic
```
For each file in the picker, the plugin computes a set of normalized features capturing context like fuzzy match quality, buffer state, directory proximity, and usage history. These features are fed into one of three scoring algorithms to produce a final ranking. All algorithms learn from your file selections and persist their parameters to disk.
Each file receives a score based on 11 features, all normalized to [0,1]:
The feature set mirrors the signals listed under the classic algorithm's `default_weights`: match, virtual_name, open, alt, proximity, project, frecency, recency, trigram, transition, and not_current. Virtual-name matching lets special files match on their parent directory (e.g. `components/index.js` matches "components"). Unbounded quantities are squashed into [0,1] with sigmoid-style curves such as `1 - 1/(1 + x/8)` and `1 - 1/(1 + score/4)`, while recency is normalized from the file's rank in the recency list as `(max - rank + 1) / max`.

Neural network (`nn`): a multi-layer perceptron that takes the 11 normalized features as input and outputs a ranking score. It is trained online using pairwise hinge loss: when you select a file, the network learns from (selected, non-selected) pairs constructed from the top-ranked candidates. Batch normalization and Leaky ReLU activations are used during training; at inference time, batch normalization is fused into the weight matrices so scoring runs with zero allocations per keystroke. The match/virtual_name features are randomly dropped during training to force the network to learn from contextual features (frecency, proximity, etc.), which improves ranking before any query is typed. Supports AdamW (the default) and SGD optimizers with optional learning-rate warmup.
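The pairwise objective can be sketched in plain Lua. This is a simplified illustration, not the plugin's actual training code; the `margin` argument corresponds to the `margin` option above:

```lua
-- Pairwise hinge loss: penalize whenever the selected file does not
-- outscore a non-selected candidate by at least `margin`.
local function hinge_loss(selected_score, other_score, margin)
  return math.max(0, margin - (selected_score - other_score))
end

-- With margin = 1.0 (the default): a selected file that wins by more
-- than the margin incurs no loss, while a tie incurs the full margin.
print(hinge_loss(2.0, 0.5, 1.0)) -- 0 (separated by more than the margin)
print(hinge_loss(1.0, 1.0, 1.0)) -- 1.0 (tie: full margin of loss)
```

Gradients from this loss push the selected file's score up and the competing candidate's score down until they are separated by the margin.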
Classic (`classic`): based on smart-open.nvim's ranking approach, adapted for Snacks.nvim and extended with trigram and transition features. It computes a weighted sum of the normalized features. When you select a file that wasn't ranked #1, the algorithm compares feature values between your selection and the higher-ranked items and adjusts the weights using a configurable learning rate.
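A weighted-sum scorer of this shape can be sketched as follows. The feature and weight tables here are illustrative subsets, not the plugin's internals; the real defaults live in `default_weights` above:

```lua
-- Score = sum over features of weight[name] * normalized_value[name].
local function classic_score(weights, features)
  local total = 0
  for name, w in pairs(weights) do
    total = total + w * (features[name] or 0)
  end
  return total
end

-- Subset of the default weights, with hypothetical normalized values:
local weights = { match = 140, frecency = 17, open = 3 }
local features = { match = 0.5, frecency = 1.0, open = 1.0 }
print(classic_score(weights, features)) -- 140*0.5 + 17 + 3 = 90
```

Because `match` carries by far the largest default weight, fuzzy match quality dominates once you start typing a query, while the contextual features drive the initial ordering.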
Naive (`naive`): a simple unweighted sum of all normalized features. No learning. Useful for testing and as a baseline.
- `setup(opts)` - Initialize the plugin with configuration
- `open(opts)` - Open the neural picker
- `reset_weights(algorithm_name?)` - Reset learned weights to defaults (optional algorithm name)
- `set_algorithm(name?)` - Set or display the current algorithm ("classic", "naive", "nn")
- `<Plug>(NeuralOpen)` - Open the neural picker
- `:NeuralOpen` - Open the neural picker
- `:NeuralOpen algorithm [name]` - Show or set the scoring algorithm
- `:NeuralOpen reset [algorithm]` - Reset weights for the current or specified algorithm

The plugin registers a `neural_open` source in Snacks.nvim:

```lua
require("snacks").picker.sources.neural_open
```
```sh
# Install dependencies
just setup

# Run tests (isolated from your real Neovim environment)
just test
```
Tests run in complete isolation using temporary XDG directories to protect your real configuration.
```sh
# Run scorer hot-path benchmark (static features, per-keystroke scoring,
# transform phase across 1K/10K/100K files)
just benchmark
```

Results are documented in `docs/benchmark-results.md`.
Contributions are welcome! Please feel free to submit issues and pull requests.
MIT License - See LICENSE file for details