Transform Neovim into your AI conversation companion with a native interface to multiple AI providers.
| 🚀 Instant Integration | ⚡ Power Features | 🛠️ Developer Experience |
|---|---|---|
| Chat with multiple AI providers directly in your editor | Dynamic Lua templates in your prompts - embed code evaluation results | Markdown rendering in responses |
| Support for Claude, OpenAI, and Google Vertex AI models | Import/export compatibility with Claude Workbench | Code block syntax highlighting |
| Native Neovim feel with proper syntax highlighting and folding | Real-time token usage and cost tracking | Automatic buffer management |
| Automatic API key management via system keyring | Message-based text objects and navigation | Customizable keymaps and styling |
| | @file references for embedding images, PDFs, and text | Lualine component for model display |
Using your preferred package manager, for example with lazy.nvim:

```lua
{
  "StanAngeloff/claudius.nvim",
  opts = {},
}
```
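If your plugin manager does not call `setup()` for you (lazy.nvim does this via `opts`), you can call it yourself after the plugin is on your runtime path. A minimal sketch, equivalent to the `opts = {}` spec above:

```lua
-- Manual setup with defaults; pass a table of options to customize (see below).
require("claudius").setup({})
```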
Claudius requires:

- The `file` command-line utility (for MIME type detection used by `@file` references)
- An API key for your chosen provider:
  - Anthropic: stored in the system keyring or set via the `ANTHROPIC_API_KEY` environment variable
  - OpenAI: stored in the system keyring or set via the `OPENAI_API_KEY` environment variable
  - Google Vertex AI: an access token set via the `VERTEX_AI_ACCESS_TOKEN` environment variable, or service account credentials

Optional Features:

- For Google Vertex AI, the Google Cloud CLI (`gcloud`) is required if using service account authentication.
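If you prefer environment variables over the system keyring described below, exporting the variables named above before launching Neovim is enough. A minimal sketch with placeholder values; set only the variable(s) for the provider you actually use:

```sh
# Placeholder values - replace with your real credentials.
export ANTHROPIC_API_KEY=your_key_here
export OPENAI_API_KEY=your_key_here
export VERTEX_AI_ACCESS_TOKEN=your_token_here
```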
Your API key can be stored and retrieved from the system keyring:

For Anthropic:

```sh
secret-tool store --label="Anthropic API Key" service anthropic key api
```

For OpenAI:

```sh
secret-tool store --label="OpenAI API Key" service openai key api
```

For Google Vertex AI (store service account JSON):

```sh
secret-tool store --label="Vertex AI Service Account" service vertex key api project_id your_project_id
```

This will securely prompt for your API key and store it in the system keyring.
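To check what was stored, `secret-tool` can look an entry up by the same attribute pairs. This is plain `secret-tool` usage, not a Claudius command:

```sh
# Prints the stored secret matching these attributes (here: the Anthropic key).
secret-tool lookup service anthropic key api
```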
The plugin works out of the box with sensible defaults, but you can customize various aspects.
```lua
require("claudius").setup({
  provider = "claude", -- AI provider: "claude", "openai", or "vertex"
  model = nil, -- Uses provider defaults if nil (see below)
  -- Claude default: "claude-3-7-sonnet-20250219"
  -- OpenAI default: "gpt-4o"
  -- Vertex default: "gemini-2.5-pro-preview-05-06"
  parameters = {
    max_tokens = nil, -- Set to nil to use default (4000)
    temperature = nil, -- Set to nil to use default (0.7)
    timeout = 120, -- Default cURL request timeout in seconds
    connect_timeout = 10, -- Default cURL connection timeout in seconds
    vertex = {
      project_id = nil, -- Google Cloud project ID (required for Vertex AI)
      location = "us-central1", -- Google Cloud region
      thinking_budget = nil, -- Optional. Budget for model thinking, in tokens. `nil` or `0` disables thinking. Values `>= 1` enable thinking with the specified budget (integer part taken).
    },
  },
  highlights = {
    system = "Special", -- highlight group or hex color (e.g., "#80a0ff") for system messages
    user = "Normal", -- highlight group or hex color for user messages
    assistant = "Comment", -- highlight group or hex color for assistant messages
  },
  role_style = "bold,underline", -- style applied to role markers like @You:
  ruler = {
    char = "━", -- character used for the separator line
    hl = "NonText", -- highlight group or hex color for the separator
  },
  signs = {
    enabled = false, -- enable sign column highlighting for roles (disabled by default)
    char = "▌", -- default vertical bar character
    system = {
      char = nil, -- use default char
      hl = true, -- inherit from highlights.system, set false to disable, or provide specific group/hex color
    },
    user = {
      char = "▏", -- override the default char for user messages
      hl = true, -- inherit from highlights.user, set false to disable, or provide specific group/hex color
    },
    assistant = {
      char = nil, -- use default char
      hl = true, -- inherit from highlights.assistant, set false to disable, or provide specific group/hex color
    },
  },
  editing = {
    disable_textwidth = true, -- Whether to disable textwidth in chat buffers
    auto_write = false, -- Whether to automatically write the buffer after changes
  },
  pricing = {
    enabled = true, -- Whether to show pricing information in notifications
  },
  notify = {
    enabled = true, -- Enable/disable notifications
    timeout = 8000, -- How long notifications stay visible (ms)
    max_width = 60, -- Maximum width of notification windows
    padding = 1, -- Padding around notification text
    border = "rounded", -- Border style (same as nvim_open_win)
    title = nil, -- Default title (nil for none)
  },
  text_object = "m", -- Default text object key, set to false to disable
  keymaps = {
    normal = {
      send = "<C-]>", -- Key to send message in normal mode
      cancel = "<C-c>", -- Key to cancel ongoing request
      next_message = "]m", -- Jump to next message
      prev_message = "[m", -- Jump to previous message
    },
    insert = {
      send = "<C-]>", -- Key to send message in insert mode
    },
    enabled = true, -- Set to false to disable all keymaps
  },
})
```
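In practice you usually only need to override a couple of these options. A small sketch using the lazy.nvim `opts` table from the installation example; the model name here is simply the OpenAI default listed above:

```lua
-- lazy.nvim plugin spec: switch the default provider and shorten notifications.
{
  "StanAngeloff/claudius.nvim",
  opts = {
    provider = "openai",          -- use OpenAI instead of the default Claude
    model = "gpt-4o",             -- explicit model; nil would use the provider default
    notify = { timeout = 4000 },  -- shorter-lived notification windows
  },
}
```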
> [!IMPORTANT]
> The plugin only works with files having the `.chat` extension. Create or open a `.chat` file and the plugin will automatically set up syntax highlighting and keybindings.
Create a new empty `Conversation.chat` file and add your first message:

```
@You: Hello Claude!
```

You may optionally start your conversation with a system prompt (must be the first message in the file):

```
@System: You are a helpful AI assistant.
```
Messages can be folded for a better overview. Press `za` to toggle folds.
The plugin provides several commands for interacting with AI providers and managing chat content; they are listed below.
By default, the following keybindings are active in chat files:

| Mode | Key | Action |
|---|---|---|
| Normal | `<C-]>` | Send the current conversation |
| Normal | `<C-c>` | Cancel the ongoing request |
| Normal | `]m` | Jump to the next message |
| Normal | `[m` | Jump to the previous message |
| Insert | `<C-]>` | Send the current conversation |
You can disable the default keymaps by setting `keymaps.enabled = false` and defining your own:
```lua
-- Example custom keymaps
vim.keymap.set('n', '<Leader>cs', '<cmd>ClaudiusSend<cr>')
vim.keymap.set('n', '<Leader>cc', '<cmd>ClaudiusCancel<cr>')
vim.keymap.set('i', '<C-s>', '<cmd>ClaudiusSendAndInsert<cr>')
vim.keymap.set('n', '<Leader>cn', '<cmd>ClaudiusNextMessage<cr>')
vim.keymap.set('n', '<Leader>cp', '<cmd>ClaudiusPrevMessage<cr>')
```
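The mappings above are global. If you would rather keep them local to chat buffers, one option is a plain autocmd keyed off the `.chat` extension (a sketch; the chosen keys are arbitrary):

```lua
-- Define custom Claudius keymaps only in buffers whose name ends in .chat.
vim.api.nvim_create_autocmd({ "BufReadPost", "BufNewFile" }, {
  pattern = "*.chat",
  callback = function(args)
    local opts = { buffer = args.buf, silent = true }
    vim.keymap.set("n", "<Leader>cs", "<cmd>ClaudiusSend<cr>", opts)
    vim.keymap.set("n", "<Leader>cc", "<cmd>ClaudiusCancel<cr>", opts)
  end,
})
```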
- `ClaudiusSend` - Send the current conversation to the configured AI provider
- `ClaudiusCancel` - Cancel an ongoing request
- `ClaudiusSendAndInsert` - Send to AI and return to insert mode
- `ClaudiusSwitch` - Switch between providers (e.g., `:ClaudiusSwitch openai gpt-4o`). If called with no arguments, it provides an interactive selection menu.
- `ClaudiusRecallNotification` - Recall the last notification (useful for reviewing usage statistics)
- `ClaudiusNextMessage` - Jump to next message (`]m` by default)
- `ClaudiusPrevMessage` - Jump to previous message (`[m` by default)
- `ClaudiusImport` - Convert a Claude Workbench API call into chat format
- `ClaudiusEnableLogging` - Enable logging of API requests and responses
- `ClaudiusDisableLogging` - Disable logging (default state)
- `ClaudiusOpenLog` - Open the log file in a new tab

Logging is disabled by default to prevent sensitive data from being written to disk. When troubleshooting issues:

1. Enable logging with `:ClaudiusEnableLogging`
2. Inspect the log with `:ClaudiusOpenLog`
3. Run `:ClaudiusDisableLogging` when done

The log file is stored at `~/.cache/nvim/claudius.log` (or equivalent on your system) and contains the logged API requests and responses.
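If you prefer to watch the log outside Neovim while reproducing an issue, following the file with standard `tail` works too (adjust the path if your cache directory differs):

```sh
# Stream new log entries as requests are made.
tail -f ~/.cache/nvim/claudius.log
```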
You can switch between AI providers at any time using the `:ClaudiusSwitch` command:

```
:ClaudiusSwitch                 # Interactive provider/model selection
:ClaudiusSwitch claude          # Switch to Claude with default model
:ClaudiusSwitch openai gpt-4o   # Switch to OpenAI with specific model
:ClaudiusSwitch vertex gemini-2.5-pro-preview-05-06 project_id=my-project   # Switch to Vertex AI with project ID
:ClaudiusSwitch claude claude-3-7-sonnet-20250219 temperature=0.2 max_tokens=1000 connect_timeout=5 timeout=60   # Multiple parameters, including general ones
:ClaudiusSwitch vertex gemini-2.5-pro-preview-05-06 project_id=my-project thinking_budget=1000   # Vertex AI with thinking budget
```
This allows you to compare responses from different AI models without restarting Neovim.
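If you flip between the same two setups often, a small mapping can wrap the command. A sketch; the key is arbitrary and the model is just the OpenAI default from the configuration section:

```lua
-- Quick keymap to switch to OpenAI's default model.
vim.keymap.set("n", "<Leader>co", "<cmd>ClaudiusSwitch openai gpt-4o<cr>", { desc = "Claudius: switch to OpenAI gpt-4o" })
```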
Claudius includes a component to display the currently active AI model in your Lualine status bar. To use it, add the component to your Lualine configuration:
```lua
-- Example Lualine setup
require('lualine').setup {
  options = {
    -- ... your other options
  },
  sections = {
    lualine_a = {'mode'},
    -- ... other sections
    lualine_x = {{ "claudius", icon = "🧠" }, 'encoding', 'filetype'}, -- Add Claudius model component with icon
    -- ... other sections
  },
  -- ...
}
```

The model display is active only for `.chat` buffers.
Chat files support two powerful features for dynamic content:

- Lua frontmatter at the start of the file for defining variables (and simple functions)
- `{{expressions}}` inside messages to evaluate Lua code

Start your chat file with a Lua code block between `` ```lua `` and `` ``` `` markers:
```lua
greeting = "Hello, World!" -- Must be global (no local keyword)
count = 42
```
```
@You: The greeting is: {{greeting}}
@Assistant: The greeting mentioned is: "Hello, World!"
@You: The count is: {{count}}
@Assistant: The count is: 42
```

Variables defined in the frontmatter are available to all expression templates in the file. Note that variables must be global (do not use the `local` keyword).
Use `{{expression}}` syntax inside any message to evaluate Lua code:

```
@You: Convert this to uppercase: {{string.upper("hello")}}
@Assistant: The text "HELLO" is already in uppercase.
@You: Calculate: {{math.floor(3.14159 * 2)}}
@Assistant: You've provided the number 6
```
The expression environment is restricted to safe operations focused on string manipulation, basic math, and table operations, such as the standard `string`, `math`, and `table` library functions used in the examples above.
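As an illustration of the table side of that environment, an expression along these lines should evaluate inside a message, assuming the standard `table` library is exposed as described (the assistant reply shown is, of course, just an example):

```
@You: Join these: {{table.concat({"alpha", "beta", "gamma"}, ", ")}}
@Assistant: You've listed: alpha, beta, gamma
```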
While you can define functions in the frontmatter, the focus is on simple templating rather than complex programming:
```lua
function greet(name)
  return string.format("Hello, %s!", name)
end
```

```
@You: {{greet("Claude")}}
@Assistant: Hello! It's nice to meet you.
```
`@file` references

You can embed content from local files directly into your messages using the `@./path/to/file` syntax. This feature requires the `file` command-line utility to be installed for MIME type detection.
Syntax:

- References must start with `./` (current directory) or `../` (parent directory), e.g., `@./images/diagram.png` or `@../documents/report.pdf`.
- Spaces in paths must be URL-encoded (e.g., `%20`) and will be automatically decoded.
- Trailing punctuation after a reference (e.g., the final `.` in `@./file.txt.`) is ignored.

If a file is not found, not readable, or its MIME type is unsupported by the provider for direct inclusion, the raw `@./path/to/file` reference will be sent as text, and a notification will be shown.
Example:

```
@You: OCR this image: @./screenshots/error.png and this document: @./specs/project%20brief.pdf
```
Provider Support:
| Claude & OpenAI | Vertex AI |
|---|---|
| Text: Plain text files (e.g., `.txt`, `.md`, `.lua`) are embedded as text | Text files (MIME type `text/*`) are embedded as text parts |
| Images: JPEG, PNG, GIF, WebP | Supports generic binary files (sent as `inlineData` with detected MIME type) |
| Documents: PDF | |
You can import conversations from the Claude Workbench (console.anthropic.com). With the Workbench API call in your current buffer, run `:ClaudiusImport` to convert it to a `.chat` file; the command will parse the API call and convert it into Claudius's chat format.
Claudius aims to provide a simple, native-feeling interface for having conversations with AI models directly in Neovim. Originally built for Claude (hence the name), it now supports multiple AI providers including OpenAI and Google Vertex AI. The plugin focuses on being lightweight and following Vim/Neovim conventions.
The project uses Nix for development environment management. This ensures all contributors have access to the same tools and versions.
```sh
export ANTHROPIC_API_KEY=your_key_here
```
Enter the development environment:

```sh
nix develop
```
Available development commands:

- `claudius-dev`: Starts an Aider session with the correct files loaded
- `claudius-fmt`: Reformats the codebase

You can test changes without installing the plugin by running:
```sh
nvim --cmd "set runtimepath+=`pwd`" -c 'lua require("claudius").setup({})' -c ':edit example.chat'
```
This command adds the current directory to Neovim's runtime path, loads Claudius with default settings, and opens an example `.chat` file.
This project represents a unique experiment in AI-driven development. From its inception to the present day, every single line of code has been written using Aider, demonstrating the potential of AI-assisted development in creating quality software.
While I encourage contributors to explore AI-assisted development, particularly with Aider, I welcome all forms of quality contributions. The project's development guidelines include running `claudius-fmt` before committing to maintain consistent style.

> [!NOTE]
> This project started as an experiment in pure AI-driven development, and to this day, every line of code has been written exclusively through Aider. I continue to maintain this approach in my own development while welcoming contributions from all developers who share a commitment to quality.
The goal is to demonstrate how far we can push AI-assisted development while maintaining code quality. Whether you choose to work with AI or write code directly, focus on creating clear, maintainable solutions.
Keywords: claude tui, claude cli, claude terminal, claude vim, claude neovim, anthropic vim, anthropic neovim, ai vim, ai neovim, llm vim, llm neovim, chat vim, chat neovim, openai vim, openai neovim, gpt vim, gpt neovim, vertex ai vim, vertex ai neovim, gemini vim, gemini neovim