The WriteAssistAI extension for VS Code uses the OpenAI APIs (or OpenAI-compatible proxies) to offer AI-powered writing assistance for markdown, LaTeX, quarto, typst (.typ) and plain text files. It comes with default actions to rephrase the selected text, or to perform tasks such as changing its tone, summarizing it, or expanding it. These actions are fully configurable through the extension's settings.
This AI text assistant provides a range of writing styles for you to select from. To access these styles and other features, simply select the text you want to rewrite in a supported file, click the Code Actions 💡 tooltip, and choose the desired action.
After a successful response from your chosen model, you'll be presented with inline actions to accept or reject the rewritten text. If you've switched to a different editor in the meantime, the response is inserted directly into the original editor, just below the selected text.
Current feature list:
- Rewrite the text using different tones. You can choose from professional, casual, formal, friendly, informative, and authoritative tones.
- Rephrase selected text
- Suggest headlines for selected text
- Summarize selected text
- Expand selected text (make it verbose)
- Shorten selected text (make it concise)
- Accept or reject the rewritten text
- Support for markdown, LaTeX, quarto, typst and plain text files
You can modify the existing actions (including their prompt), or add new ones through the extension's settings.
To use the extension you need to provide your own OpenAI (or OpenAI-Compatible provider) API Key.
You can install the Write Assist AI extension from the VS Code Marketplace or the Open VSX Registry.
You can configure Write Assist AI using either VS Code settings or project-level config files for the supported options.
File-based config takes precedence if present.
For project-specific settings, create a .write-assist-ai/ folder at your workspace root.
Supported files:
- `systemPrompt.md`: System prompt for the AI (Markdown, multiline, easy to edit)
- `quickFixes.json`: Quick Fix actions (JSON array, same schema as settings)
- `rewriteOptions.json`: Rewrite actions (JSON array, same schema as settings)
If these files exist, their contents will override the corresponding VS Code settings for your project.
Note
- File-based configuration is only available for the above settings.
- Not all extension settings are supported via file-based config.
- File-based config does not support language-specific overrides, unlike VS Code settings, which allow you to set different values per language.
You can generate these files with default values using the following commands from the Command Palette:
- Write Assist AI: Generate System Prompt File
- Write Assist AI: Generate Quick Fixes File
- Write Assist AI: Generate Rewrite Options File
If a file already exists, you'll be prompted before overwriting.
Default `systemPrompt.md`:

```
You are a helpful assistant. Your job is to perform the tasks related to rewriting text inputs given by the user.
...
```

Default `quickFixes.json`:

```json
[
  {
    "title": "Rephrase the selected text",
    "description": "Rephrases the selected text",
    "prompt": "Rephrase the given text and make the sentences more clear and readable."
  }
]
```

Default `rewriteOptions.json`:

```json
[
  {
    "title": "Change to professional tone",
    "description": "Changes the selected text's tone to professional",
    "prompt": "Make the given text better and rewrite it in a professional tone."
  }
]
```

If no file-based config is present, or for settings that aren't supported with file-based config, the extension uses VS Code settings as described below:
- `writeAssistAi.maxTokens`: Maximum number of tokens to use for each OpenAI API call. The default is `4096`.
- `writeAssistAi.temperature`: Temperature value to use for the API calls. The default is `0.3`.
- `writeAssistAi.openAi.model`: The OpenAI model to use. The default is `gpt-5`.
- `writeAssistAi.openAi.customModel`: To use a custom model, select `custom` from the `writeAssistAi.openAi.model` dropdown menu, and enter your model name here.
- `writeAssistAi.openAi.proxyUrl`: To use a proxy for AI calls or to connect with an OpenAI-compatible AI provider (such as Ollama, Groq etc.), set this to your preferred value. If you choose a different provider, you will also need to update the API Key and specify the custom model you wish to use.
- `writeAssistAi.openAi.reasoningEffort`: Controls the amount of reasoning the model does before generating a response. Higher values may lead to more thoughtful responses but can increase latency and cost. This setting is primarily for reasoning models like `gpt-5`, `o1` etc. The default is `auto`. Note: You may get "reasoning not supported" errors with custom models while using a proxy. Set the reasoning effort to `auto` to disable reasoning.
- `writeAssistAi.systemPrompt`: Sets a common system prompt to be used with LLM API calls. You can also configure language-specific system prompts using VS Code settings (e.g., `@lang:markdown Write Assist AI`). Note: File-based config (`systemPrompt.md`) does not support language-specific overrides.
- `writeAssistAI.useAcceptRejectFlow`: When enabled, the original and rewritten text are shown one below the other, with options to accept or reject the rewritten version. If disabled, the rewritten text is automatically inserted into the editor, enclosed within the configured `writeAssistAi.separatorText`. By default, this is set to `true`.
- `writeAssistAI.separatorText`: Defines the separator text that surrounds the output generated by the AI. By default, it is set to `*` repeated 32 times. If you prefer to remove the separators, set this option to an empty string.
- `writeAssistAi.quickFixes`: Sets the actions that show up in the editor's tooltip menu under the `Quick Fix` section. This is also configurable per language in VS Code settings, but file-based config (`quickFixes.json`) does not support language-specific overrides.
- `writeAssistAi.rewriteOptions`: Sets the commands that show up in the editor's tooltip menu under the `Rewrite` section. This is also configurable per language in VS Code settings, but file-based config (`rewriteOptions.json`) does not support language-specific overrides.
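For illustration, a minimal `settings.json` sketch combining a few of these options might look like the following (all values are placeholders, not recommendations):

```jsonc
{
  // Cap tokens per API call and keep the output fairly deterministic
  "writeAssistAi.maxTokens": 4096,
  "writeAssistAi.temperature": 0.3,
  // Default OpenAI model; set to "custom" to use customModel instead
  "writeAssistAi.openAi.model": "gpt-5",
  // "auto" lets the extension skip reasoning where unsupported
  "writeAssistAi.openAi.reasoningEffort": "auto",
  // Show the inline accept/reject flow for rewritten text
  "writeAssistAI.useAcceptRejectFlow": true
}
```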
In addition, you need to set your OpenAI API Key (or the OpenAI-compatible provider's API Key) via the Command Palette under the Write Assist AI category. If not configured already, you can also set it when you use the extension for the first time. Your key is stored securely in VS Code's `secretStorage`.
To utilize other OpenAI-compatible providers (such as Ollama, Groq etc.), follow these steps:
- Configure the correct OpenAI-compatible base URL by adjusting the `writeAssistAi.openAi.proxyUrl` setting.
- Enter the API Key for your chosen provider using the Command Palette.
- Change the `writeAssistAi.openAi.model` setting to `custom` and specify the desired model name in the `writeAssistAi.openAi.customModel` setting.
Example configuration for using Ollama:
```json
{
  "writeAssistAi.openAi.proxyUrl": "http://localhost:11434/v1",
  "writeAssistAi.openAi.model": "custom",
  "writeAssistAi.openAi.customModel": "llama3.2"
}
```

The API Key for Ollama can be any text, say `ollama` itself.
Once you've completed these steps, you'll be ready to use the alternative provider.
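As another sketch, a configuration for Groq might look like the following. The base URL is Groq's OpenAI-compatible endpoint; the model name is just an example and must be one your provider actually serves:

```json
{
  "writeAssistAi.openAi.proxyUrl": "https://api.groq.com/openai/v1",
  "writeAssistAi.openAi.model": "custom",
  "writeAssistAi.openAi.customModel": "llama-3.1-8b-instant"
}
```

As with Ollama, remember to set the provider's API Key via the Command Palette.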
You can now define actions either in your workspace config files (explained above) or in VS Code settings. Both writeAssistAi.quickFixes and writeAssistAi.rewriteOptions use the same JSON Schema to define actions. You can edit or remove existing actions, or create a new one by adding an action object.
For instance, you can include a new Quick Fix action in your settings.json file to translate the selected text to French.
```jsonc
"writeAssistAi.quickFixes": [
  // ...
  {
    "title": "Translate into French",
    "description": "Translates the selected text into French",
    "prompt": "Translate the given text into French."
  },
  // ...
]
```

To specify actions for a specific language, place the actions within the corresponding language configuration block. For example, to ensure that the "Translate into French" action only applies to markdown files, you can do the following in your settings.json:
```jsonc
{
  "[markdown]": {
    // other settings
    "writeAssistAi.quickFixes": [
      // ...
      {
        "title": "Translate into French",
        "description": "Translates the selected text into French",
        "prompt": "Translate the given text into French."
      },
      // ...
    ]
  }
}
```

Note
Default actions are activated only when no action has been specified for a supported language. If you have defined specific actions for a particular language, only those actions will be visible for that language.
If the AI's response is truncated, it may be because the request hit the max_tokens limit. The extension will detect this and show a notification with an option to "Retry without limit". Clicking this will resend the request without the token constraint, which usually resolves the issue.
Alternatively, you can manually increase the writeAssistAi.maxTokens value in your settings for future requests.
Some models do not support reasoning capabilities. If you encounter a "reasoning not supported" error, try setting the writeAssistAi.openAi.reasoningEffort setting to auto to disable reasoning for that model.
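For instance, if a custom model behind a proxy rejects reasoning, or responses keep getting truncated, the two settings involved can be adjusted together in `settings.json` (values illustrative):

```jsonc
{
  // Raise the per-call token cap to reduce truncation
  "writeAssistAi.maxTokens": 8192,
  // "auto" disables reasoning for models that don't support it
  "writeAssistAi.openAi.reasoningEffort": "auto"
}
```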
---
- Support for typst (`.typ`) files
- Added the `gpt-5.1` model to the selection dropdown, and made it the default model
- README updated for troubleshooting the "reasoning not supported" errors with custom models with a proxy.
- Fixed an issue where the API progress indicator would not auto dismiss on API errors.
- Fixed an issue where the dynamic API-KEY change would not reflect in the subsequent requests without a reload.
- New setting `writeAssistAi.openAi.reasoningEffort` to control model reasoning for newer models
- Improved handling of truncated API responses with a new retry mechanism.
- Fixed an issue where Code Actions would not reappear after cancelling a request.
- Support for file-based configuration (#29):
  - `.write-assist-ai/systemPrompt.md`: system prompt text
  - `.write-assist-ai/quickFixes.json`: quick fix actions
  - `.write-assist-ai/rewriteOptions.json`: rewrite actions
- Commands to generate these files with default values
- File-based configuration now takes precedence over VS Code settings
- Compatibility issue where newer models do not support the `max_tokens`/`temperature` settings (#30)
- Improved error handling for the custom model setting (#27)
- Updated OpenAI model list and set default to `gpt-5`
- Default `max_tokens` increased to 4096
- README updated with usage instructions for file-based configuration
- Option to enable/disable the inline accept/reject flow for AI suggestions (#28)
- Added explicit support for `mdx` files (#25)
- Updated README with Ollama setup instructions (#26)
- Support for inline accept/reject of the AI suggestions with git diff like interface (#23)
- If the active editor is changed while waiting for the AI response, the rephrased text is inserted directly into the correct editor
- Fixed the issue of inserting the rephrased text into the wrong editor if the active editor is changed while waiting for the AI response (#24)
- New demo gif for the extension showing the inline accept/reject feature
- Updated the README with the new feature
- Added explicit support for Quarto files (The official Quarto extension registers a new languageId, so this extension stopped working with it).
- Moved from Webpack to esbuild for building the extension
- Option to set a `proxyUrl` (baseURL) for the OpenAI calls
- Added the `gpt-4o-mini` model to the selection dropdown
- Option to set/remove the separator text around the AI response
- Fixed: `CodeActions` stopped showing if the user didn't enter the API Key
To see the complete changelog, click here.
This extension is licensed under the MIT License
