This plugin is an iteration on the window features of jackMort/ChatGPT.nvim. It is designed to tighten your Neovim workflow with AI LLMs, and it offers two windows: one for chat and one for code editing.

The plugin is developed with a focus on stability and user experience. The plugin code itself is well tested and leverages the Lua type system, which makes it more robust.
This plugin offers two commands: `:GPTModelsCode` and `:GPTModelsChat`.
The code window (`:GPTModelsCode`) is great for iterating on a selection of code. It has three panes: the left pane holds the code you're working on, the right pane holds code-only responses from the LLM, and an input pane holds your prompt. If you open it with a visual selection, that selection is placed into the left pane. If your visual selection has LSP diagnostics, they are placed into the input pane. From there you can iterate on the prompt and, ultimately, the code. Press Ctrl-x to replace the left pane with the contents of the right, then rinse and repeat. In this window, code is the main focus. I use it whenever I want an AI to modify code I'm working on. The prompt behind the scenes is tuned so the AI responds only with code (although prompts are hard to get perfect, so be lenient). Other features, including changing models and including files, are labeled on the window.
The chat window (`:GPTModelsChat`) is great for having a free-form conversation. You can still open it with a selection, include files, cancel requests, etc.
The keymaps work in normal mode within the popup windows.
The keymaps strive to be labeled and easily visible from within the plugin, so you don't need to reference them here. But here they are anyway:
| Keybinding | Action | Description |
|---|---|---|
| `<CR>` | send request | pressing Enter sends your prompt, plus any files or selected code, to the LLM |
| `q` | quit | close the window |
| `[S]Tab` | cycle windows | switch focus into each window successively |
| `C-c` | cancel request | send SIGTERM to the curl command making the fetch |
| `C-f` | add files | open the file picker window and include file contents in your request |
| `C-g` | clear files | clear the files you have selected, leaving the windows be |
| `C-x` | xfer to deck | in the code window, transfer the right pane's contents to the left |
| `C-j/k` | cycle models | cycle through which LLM model to use for further requests |
| `C-p` | pick model | open a popup window to pick a model; useful when you have many models |
| `C-n` | clear all | clear the whole state, including all windows and files |
This plugin requires `curl` to be installed for requests.

For Ollama requests, have Ollama running locally. For OpenAI requests, have the `OPENAI_API_KEY` environment variable set.
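For example, assuming a bash or zsh shell, the OpenAI key can be exported in your shell profile (the key value below is a placeholder, not a real key):

```shell
# Set the OpenAI API key (placeholder value -- substitute your real key)
export OPENAI_API_KEY="sk-your-key-here"

# Verify it is visible to child processes such as Neovim
echo "${OPENAI_API_KEY:+key is set}"
```

Put the `export` line in `~/.bashrc` or `~/.zshrc` so it is set for every session.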
Now, in your favorite package manager:
lazy:

```lua
{
  "Aaronik/GPTModels.nvim",
  dependencies = {
    "MunifTanjim/nui.nvim",
    "nvim-telescope/telescope.nvim"
  }
}
```
Plug:

```vim
Plug 'MunifTanjim/nui.nvim'
Plug 'nvim-telescope/telescope.nvim'
Plug 'Aaronik/GPTModels.nvim'
```
(Contributions to this readme for how to do other package managers are welcome)
Here are some examples of how to map these -- this is what I use: `<leader>a` for the code window and `<leader>c` for the chat.

In `init.lua`:
```lua
-- Both visual and normal mode for each, so you can open with a visual selection or without.
vim.api.nvim_set_keymap('v', '<leader>a', ':GPTModelsCode<CR>', { noremap = true })
vim.api.nvim_set_keymap('n', '<leader>a', ':GPTModelsCode<CR>', { noremap = true })
vim.api.nvim_set_keymap('v', '<leader>c', ':GPTModelsChat<CR>', { noremap = true })
vim.api.nvim_set_keymap('n', '<leader>c', ':GPTModelsChat<CR>', { noremap = true })
```
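If you are on Neovim 0.7 or newer, `vim.keymap.set` accepts a list of modes, so the same four mappings can be written more compactly (mappings are non-recursive by default with this API):

```lua
-- Equivalent mappings via vim.keymap.set (Neovim 0.7+), covering both modes at once
vim.keymap.set({ 'n', 'v' }, '<leader>a', ':GPTModelsCode<CR>')
vim.keymap.set({ 'n', 'v' }, '<leader>c', ':GPTModelsChat<CR>')
```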
In `.vimrc`:
" Or if you prefer the traditional way
nnoremap <leader>a :GPTModelsCode<CR>
vnoremap <leader>a :GPTModelsCode<CR>
nnoremap <leader>c :GPTModelsChat<CR>
vnoremap <leader>c :GPTModelsChat<CR>
Big thanks to @jackMort for the inspiration for the code window. I used jackMort/ChatGPT.nvim for a long time before deciding to write this plugin.