sllm.nvim is a Neovim plugin that integrates Simon Willison’s llm
CLI directly into your editor.
Chat with large language models, stream responses in a scratch buffer, manage context files, switch models or tool integrations on the fly, and control everything asynchronously without leaving Neovim.
The landscape of AI plugins for Neovim is growing. To understand the philosophy behind sllm.nvim
and see how it compares to other popular plugins, please read the PREFACE.md.
- Pick `llm` models and tools interactively and add selected tools to your context.
- Token usage stats are appended to each response (if `show_usage` is enabled).

**Install the `llm` CLI**

Follow the instructions at https://github.com/simonw/llm (e.g. `brew install llm` or `pip install llm`).

💡 If `llm` is not in your system's `PATH`, you can set the full path in the configuration via the `llm_cmd` option.
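
For example, a minimal sketch of pointing the plugin at a specific binary (the path below is only a placeholder; use the output of `which llm` on your machine):

```lua
-- Placeholder path: adjust for your system if `llm` is not on PATH.
require("sllm").setup({
  llm_cmd = "/opt/homebrew/bin/llm",
})
```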

**Install one or more `llm` extensions**

```sh
llm install llm-openai
llm install llm-openrouter
llm install llm-gpt4all
```

…or any other plugin supported by `llm`.

💡 The `llm-openrouter` extension gives access to over 300 models (some free) via OpenRouter. See all available LLM plugins for the `llm` CLI at https://llm.datasette.io/plugins/directory.

**Configure your API key(s)**

```sh
llm keys set openai
# or for other services
llm keys set openrouter
```

or set environment variables like `OPENAI_API_KEY`.
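
Optionally, you can verify from inside Neovim that the CLI is reachable. This snippet is just a convenience check, not part of the plugin:

```lua
-- Optional sanity check: confirm the `llm` executable is on PATH and print its version.
if vim.fn.executable("llm") == 1 then
  print(vim.fn.system({ "llm", "--version" }))
else
  print("llm not found on PATH; set `llm_cmd` in require('sllm').setup() to its full path")
end
```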

Using lazy.nvim:

```lua
{
  "mozanunal/sllm.nvim",
  dependencies = {
    "echasnovski/mini.notify",
    "echasnovski/mini.pick",
  },
  config = function()
    require("sllm").setup({
      -- your custom options here
    })
  end,
}
```

Using packer.nvim:

```lua
use({
  "mozanunal/sllm.nvim",
  requires = { "echasnovski/mini.notify", "echasnovski/mini.pick" },
  config = function()
    require("sllm").setup({
      -- your custom options here
    })
  end,
})
```
Call `require("sllm").setup()` with an optional table of overrides:
require("sllm").setup({
llm_cmd = "llm", -- command or path for the llm CLI
default_model = "gpt-4.1", -- default llm model (set to "default" to use llm's default model)
show_usage = true, -- append usage stats to responses
on_start_new_chat = true, -- start fresh chat on setup
reset_ctx_each_prompt = true, -- clear file context each ask
window_type = "vertical", -- Default. Options: "vertical", "horizontal", "float"
-- function for item selection (like vim.ui.select)
pick_func = require("mini.pick").ui_select,
-- function for notifications (like vim.notify)
notify_func = require("mini.notify").make_notify(),
-- function for inputs (like vim.ui.input)
input_func = vim.ui.input,
-- See the "Customizing Keymaps" section for more details
keymaps = {
-- Change a default keymap
ask_llm = "<leader>a",
-- Disable a default keymap
add_url_to_ctx = false,
-- Other keymaps will use their default values
},
})

| Option | Type | Default | Description |
|---|---|---|---|
| `llm_cmd` | string | `"llm"` | Command or path for the `llm` CLI tool. |
| `default_model` | string | `"gpt-4.1"` | Model to use on startup. |
| `show_usage` | boolean | `true` | Include a token usage summary in responses. If `true`, you'll see details after each interaction. |
| `on_start_new_chat` | boolean | `true` | Begin with a fresh chat buffer on plugin setup. |
| `reset_ctx_each_prompt` | boolean | `true` | Automatically clear file context after every prompt (if `true`). |
| `window_type` | string | `"vertical"` | Window style: `"vertical"`, `"horizontal"`, or `"float"`. |
| `pick_func` | function | `require('mini.pick').ui_select` | UI function for interactive model selection. |
| `notify_func` | function | `require('mini.notify').make_notify()` | Notification function. |
| `input_func` | function | `vim.ui.input` | Input prompt function. |
| `keymaps` | table/false | (see defaults) | A table of keybindings. Set any key to `false` or `nil` to disable it. Set the whole `keymaps` option to `false` to disable all defaults. |
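
Since `pick_func`, `notify_func`, and `input_func` only need to behave like `vim.ui.select`, `vim.notify`, and `vim.ui.input`, a possible sketch (untested assumption, not an official configuration) is to point them at the Neovim built-ins instead of the `mini.nvim` modules:

```lua
-- Sketch: use Neovim built-ins for selection, notifications, and input.
-- Assumes any functions with the vim.ui.select / vim.notify / vim.ui.input
-- signatures are acceptable, which is what the defaults suggest.
require("sllm").setup({
  pick_func = vim.ui.select,
  notify_func = vim.notify,
  input_func = vim.ui.input,
})
```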
The following table lists the default keybindings. All of them can be changed or disabled in your setup
configuration (see Customizing Keymaps).

| Keymap | Mode | Action |
|---|---|---|
| `<leader>ss` | n,v | Prompt the LLM with an input box |
| `<leader>sn` | n,v | Start a new chat (clears buffer) |
| `<leader>sc` | n,v | Cancel current request |
| `<leader>sf` | n,v | Focus the LLM output buffer |
| `<leader>st` | n,v | Toggle LLM buffer visibility |
| `<leader>sm` | n,v | Pick a different LLM model |
| `<leader>sa` | n,v | Add current file to context |
| `<leader>su` | n,v | Add content of a URL to context |
| `<leader>sv` | v | Add visual selection to context |
| `<leader>sd` | n,v | Add diagnostics to context |
| `<leader>sx` | n,v | Add shell command output to context |
| `<leader>sT` | n,v | Add an installed tool to context |
| `<leader>sF` | n,v | Add Python function from buffer/selection as a tool |
| `<leader>sr` | n,v | Reset/clear all context files |
You have full control over the keybindings. Here are the common scenarios:
If you are happy with the default keymaps, you don't need to pass a `keymaps` table at all. Just call `setup()` with no arguments or with other options.

To override specific keymaps, provide your new binding. To disable a keymap you don't use, set its value to `false` or `nil`. Any keymaps you don't specify will keep their default values.

```lua
-- In your setup() call:
require("sllm").setup({
  keymaps = {
    -- CHANGE: Use <leader>a for asking the LLM instead of <leader>ss
    ask_llm = "<leader>a",
    -- DISABLE: I don't use the "add URL" or "add tool" features
    add_url_to_ctx = false,
    add_tool_to_ctx = nil, -- `nil` also works for disabling
  },
})
```

If you prefer to set up all keybindings manually, you can disable all defaults by passing `false` or an empty table `{}`.

```lua
-- In your setup() call:
require("sllm").setup({
  keymaps = false,
})

-- Now you can define your own from scratch
local sllm = require("sllm")
vim.keymap.set({ "n", "v" }, "<leader>a", sllm.ask_llm, { desc = "Ask LLM [custom]" })
```

Typical workflow:

- Ask the LLM: press `<leader>ss`; type your prompt and hit Enter.
- Add the current file to the context with `<leader>sa`.
- Add a visual selection with `<leader>sv`.
- Add diagnostics with `<leader>sd`.
- Add the content of a URL with `<leader>su`.
- Add shell command output with `<leader>sx`.
- Add an installed tool with `<leader>sT`, then pick from the list.
- Add a Python function as a tool with `<leader>sF` (use visual mode for a selection, or normal mode for the whole file).
- Reset/clear all context with `<leader>sr`.
- Switch models with `<leader>sm`.
- Cancel the current request with `<leader>sc`.

Internally, sllm.nvim is organized into a few small modules:

- **Context Manager** (`sllm.context_manager`): tracks a list of file paths, text snippets, tool names, and function definitions to include in subsequent prompts.
- **Backend** (`sllm.backend.llm`): builds and executes the `llm` CLI command, optionally adding `-T <tool>` for each active tool or `--functions <py_function>` for ad-hoc functions (see the sketch after this list).
- **Job Manager** (`sllm.job_manager`): spawns a Neovim job for the CLI and streams stdout line-by-line.
- **UI** (`sllm.ui`): creates and manages a scratch markdown buffer to display streaming output.
- **Utils** (`sllm.utils`): helper functions for buffer/window checks, path utilities, and more.
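
To make the backend's role concrete, here is a minimal, hypothetical sketch of that kind of command assembly and job streaming. The helper name and tool name below are made up for illustration and are not part of the plugin's API:

```lua
-- Illustrative sketch only (not the actual sllm.backend.llm code): assemble a
-- prompt, active tools, and ad-hoc Python functions into an `llm` invocation.
local function build_llm_args(opts)
  local args = { opts.llm_cmd or "llm", "prompt" }
  if opts.model then
    vim.list_extend(args, { "-m", opts.model })
  end
  for _, tool in ipairs(opts.tools or {}) do
    vim.list_extend(args, { "-T", tool }) -- one -T per active tool
  end
  for _, fn_src in ipairs(opts.functions or {}) do
    vim.list_extend(args, { "--functions", fn_src }) -- ad-hoc Python function source
  end
  table.insert(args, opts.prompt)
  return args
end

local args = build_llm_args({
  model = "gpt-4.1",
  tools = { "llm_time" }, -- example tool name, illustrative only
  prompt = "What time is it?",
})
vim.fn.jobstart(args, {
  stdout_buffered = false,
  on_stdout = function(_, lines)
    vim.print(lines) -- sllm.ui appends streamed lines to the scratch buffer instead
  end,
})
```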

sllm.nvim builds on Simon Willison's `llm` CLI; sllm.nvim itself is created and maintained by mozanunal, focusing on integrating these tools smoothly into Neovim.

Apache 2.0 — see LICENSE.
`llm` and its extensions are copyright Simon Willison.