Continue: What's New in March 2026
A summary of all updates, new features, and bug fixes released for Continue during March 2026.
New Features in March 2026
- Enabled filtering of session history based on the current workspace directory.(v1.2.22-vscode)
- Added support for configuration files located in the .continue/configs directory.(v1.2.22-vscode)
- Enabled filtering of session history based on the current workspace directory.(v1.3.38-vscode)
- Added support for configuration files located in the .continue/configs directory.(v1.3.38-vscode)
- Enabled filtering of session history based on the current workspace directory.(v1.0.67-jetbrains)
- Added support for configuration files located in the .continue/configs directory.(v1.0.67-jetbrains)
- Enabled ClawRouter as a new provider option for cost-optimized model routing.(v1.3.36-vscode)
- Added `useResponsesApi` option to allow users to opt out of the Responses API.(v1.2.19-vscode)
- Enabled OpenRouter to send HTTP-Referer and X-Title headers to identify the application when making requests.(v1.2.19-vscode)
- Added the `useResponsesApi` option to allow users to opt out of the Responses API.(v1.0.64-jetbrains)
- Enabled OpenRouter to send HTTP-Referer and X-Title headers to help identify the application.(v1.0.64-jetbrains)
- Added support for Gemini 3 tools, suffix stripping, thought signature, and the autocomplete endpoint for the OpenRouter provider.(v1.0.64-jetbrains)
- Added `useResponsesApi` option to allow users to opt out of the Responses API.(v1.3.35-vscode)
- Enabled OpenRouter to send HTTP-Referer and X-Title headers to identify the application when making requests.(v1.3.35-vscode)
- Added Tensorix as a new supported LLM provider.(v1.2.18-vscode)
- Introduced MiniMax as a new LLM provider, defaulting to M2.7.(v1.2.18-vscode)
- Enabled support for Gemini models through the AI SDK.(v1.2.18-vscode)
- Added Bedrock API key authentication support.(v1.2.18-vscode)
- Enabled the command-line interface (CLI) to discover and use .continue/checks/ in code review contexts.(v1.2.18-vscode)
- Enabled CLI users to invoke skills directly and import any skill.(v1.2.18-vscode)
- Enabled CLI users to export and import chat sessions.(v1.2.18-vscode)
- Added Qwen multi-file FIM template for improved repository-level autocompletion.(v1.2.18-vscode)
- Added Tensorix as a new LLM provider.(v1.0.63-jetbrains)
- Enabled discovery of .continue/checks/ in continuous integration (CI) review mode via the CLI.(v1.0.63-jetbrains)
- Introduced invokable skills and the ability to import any skill via the CLI.(v1.0.63-jetbrains)
- Enabled export and import functionality for chat sessions via the CLI.(v1.0.63-jetbrains)
- Added support for Qwen multi-file FIM template for repository-level autocompletion.(v1.0.63-jetbrains)
- Enabled support for Gemini models through the AI SDK.(v1.0.63-jetbrains)
- Added support for Bedrock API key authentication.(v1.0.63-jetbrains)
- Added Tensorix as a new LLM provider.(v1.3.34-vscode)
- Enabled discovery of .continue/checks/ in continuous integration (CI) review mode via the CLI.(v1.3.34-vscode)
- Introduced invokable skills and the ability to import any skill via the CLI.(v1.3.34-vscode)
- Enabled export and import functionality for chat sessions via the CLI.(v1.3.34-vscode)
- Added support for Qwen multi-file FIM template for repository-level autocompletion.(v1.3.34-vscode)
- Enabled support for Gemini models through the AI SDK.(v1.3.34-vscode)
- Added support for Bedrock API key authentication.(v1.3.34-vscode)
- Enabled background execution for bash tool commands.(v1.2.17-vscode)
- Introduced support for Claude Sonnet and Opus 4.6 models.(v1.2.17-vscode)
- Integrated the ai-sdk provider for model connectivity.(v1.2.17-vscode)
- Added support for z.ai models.(v1.2.17-vscode)
- Enabled pre-install suggestions within devbox blueprints via CLI.(v1.2.17-vscode)
- Added turn-level prompt caching for improved performance.(v1.2.17-vscode)
- Enabled ai-sdk support for xai and deepseek models.(v1.2.17-vscode)
- Added quiz answers functionality to the CLI.(v1.2.17-vscode)
- Allowed background job permission checks by default.(v1.2.17-vscode)
- Introduced a hooks system for CLI event interception.(v1.2.17-vscode)
- Added 5 new agent checks derived from codebase history.(v1.2.17-vscode)
- Added an alias for /title, allowing usage of /rename.(v1.2.17-vscode)
- Enabled background execution for bash tool commands.(v1.0.62-jetbrains)
- Introduced support for Claude Sonnet and Opus 4.6 models.(v1.0.62-jetbrains)
- Integrated the ai-sdk provider for model connectivity.(v1.0.62-jetbrains)
- Added support for z.ai models.(v1.0.62-jetbrains)
- Enabled pre-install suggestions within devbox blueprints via CLI.(v1.0.62-jetbrains)
- Added turn-level prompt caching for improved performance.(v1.0.62-jetbrains)
- Enabled ai-sdk support for xai and deepseek models.(v1.0.62-jetbrains)
- Added quiz answers functionality to the CLI.(v1.0.62-jetbrains)
- Allowed background job permission checks by default.(v1.0.62-jetbrains)
- Introduced a hooks system for CLI event interception.(v1.0.62-jetbrains)
- Added 5 new agent checks derived from codebase history.(v1.0.62-jetbrains)
- Added an alias for /title, allowing usage of /rename.(v1.0.62-jetbrains)
- Enabled background execution for bash tool commands.(v1.3.33-vscode)
- Introduced turn-level prompt caching for improved response times.(v1.3.33-vscode)
- Integrated support for Claude Sonnet and Opus 4.6 models.(v1.3.33-vscode)
- Added support for z.ai models.(v1.3.33-vscode)
- Enabled integration of the ai-sdk provider.(v1.3.33-vscode)
- Added pre-install suggestions via CLI in devbox blueprints.(v1.3.33-vscode)
- Enabled AI SDK usage via an environment variable.(v1.3.33-vscode)
- Added ai-sdk support for xai and deepseek models.(v1.3.33-vscode)
- Introduced quiz answers functionality in the CLI.(v1.3.33-vscode)
- Enabled checking for background job permissions by default.(v1.3.33-vscode)
- Added a hooks system for CLI event interception.(v1.3.33-vscode)
- Added 5 new agent checks derived from codebase history.(v1.3.33-vscode)
- Added an alias for /title, allowing use of /rename.(v1.3.33-vscode)
- Documented the check CLI for running checks locally.(v1.3.33-vscode)
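Several of the features above mention the `useResponsesApi` option and the new .continue/configs directory. As a hedged illustration only (the `useResponsesApi` key comes from the notes above, but the file layout and nesting are assumptions; consult the Continue configuration reference for the authoritative schema), opting out of the Responses API might look like:

```yaml
# Hypothetical sketch of a file under .continue/configs/
# (nesting and model entry are illustrative; `useResponsesApi`
# is the opt-out option introduced in the releases above)
name: my-openai-setup
models:
  - name: gpt-4o
    provider: openai
    model: gpt-4o
    useResponsesApi: false # opt out of the Responses API
```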
Bug Fixes in March 2026
- Fixed an issue where the application would crash or fail if config.yaml was missing or empty upon access.(v1.2.22-vscode)
- Fixed an issue where the application would crash or fail if config.yaml was missing or empty upon access.(v1.3.38-vscode)
- Fixed the Winston logger incorrectly writing to stdout, preventing IPC stream corruption.(v1.0.67-jetbrains)
- Ensured that config.yaml exists and is properly populated when accessed, preventing potential errors.(v1.0.67-jetbrains)
- Removed the gate that restricted template-based tool support when using Ollama.(v1.2.21-vscode)
- Removed the gate that restricted template-based tool support when using Ollama.(v1.3.37-vscode)
- Removed the gate that restricted template-based tool support when using Ollama.(v1.0.66-jetbrains)
- Fixed StringIndexOutOfBoundsException when reading ranges in files.(v1.2.20-vscode)
- Resolved critical and high security vulnerabilities.(v1.2.20-vscode)
- Fixed mapping of max_tokens to max_completion_tokens for the DeepSeek reasoner in the OpenAI adapter.(v1.2.20-vscode)
- Prevented IDE freezes in JetBrains by hardening remote configuration synchronization.(v1.2.20-vscode)
- Resolved crashes in DiffStreamHandler caused by unsafe casts and negative line numbers.(v1.2.20-vscode)
- Added a guard for disposed JCEF browsers in sendToWebview to prevent errors.(v1.2.20-vscode)
- Fixed synchronization of message type pass-through lists between the core and JetBrains components.(v1.2.20-vscode)
- Resolved SideEffectNotAllowedException occurring in intention previews.(v1.2.20-vscode)
- Prevented a responseListeners memory leak within CoreMessenger.(v1.2.20-vscode)
- Fixed JetBrains sidebar freezes by chunking large JCEF messages.(v1.2.20-vscode)
- Handled square brackets in file paths for autocomplete functionality within IntelliJ.(v1.2.20-vscode)
- Fixed an issue where the DeepSeek reasoner adapter was not correctly mapping max_tokens to max_completion_tokens.(v1.0.65-jetbrains)
- Hardened JetBrains remote configuration synchronization to prevent IDE freezes.(v1.0.65-jetbrains)
- Resolved crashes in DiffStreamHandler caused by unsafe casts and negative line numbers.(v1.0.65-jetbrains)
- Added a guard for disposed JCEF browsers in the sendToWebview function to prevent errors.(v1.0.65-jetbrains)
- Fixed an issue where message type pass-through lists were not syncing correctly between the core and JetBrains components.(v1.0.65-jetbrains)
- Resolved SideEffectNotAllowedException errors that occurred during intention previews.(v1.0.65-jetbrains)
- Prevented a responseListeners memory leak within the CoreMessenger.(v1.0.65-jetbrains)
- Fixed JetBrains sidebar freezes by chunking large JCEF messages.(v1.0.65-jetbrains)
- Handled square brackets in file paths for autocomplete functionality within IntelliJ.(v1.0.65-jetbrains)
- Fixed crashes and exceptions occurring in the DiffStreamHandler related to unsafe casts and negative line numbers.(v1.3.36-vscode)
- Resolved crashes in intention previews caused by SideEffectNotAllowedException.(v1.3.36-vscode)
- Prevented potential memory leaks in CoreMessenger by fixing responseListeners.(v1.3.36-vscode)
- Fixed an issue where large JCEF messages caused freezes in the JetBrains sidebar by implementing message chunking.(v1.3.36-vscode)
- Fixed StringIndexOutOfBoundsException when reading ranges in files.(v1.3.36-vscode)
- Resolved critical and high security vulnerabilities.(v1.3.36-vscode)
- Fixed an issue where square brackets in file paths caused problems with autocomplete in IntelliJ.(v1.3.36-vscode)
- Fixed an issue where a guard was missing for disposed JCEF browsers in sendToWebview calls.(v1.3.36-vscode)
- Removed Llama 3.1 405B from the Groq provider.(v1.2.19-vscode)
- Fixed an issue where Gemini merged consecutive same-role messages, causing ordering errors.(v1.2.19-vscode)
- Fixed an issue where tool arguments (MCP tool args) were not being coerced to match schema string types.(v1.2.19-vscode)
- Fixed mapping of `reasoning-delta` to `reasoning_content` instead of `content` for certain models.(v1.2.19-vscode)
- Fixed an issue preventing multiple context providers of the same type from being configured in `config.yaml`.(v1.2.19-vscode)
- Stopped CLI free-trial polling for models that are not in a free-trial state.(v1.2.19-vscode)
- Removed inline backtick fences from tool instruction prose.(v1.2.19-vscode)
- Fixed handling of multiple zip files during the JetBrains release artifact creation step.(v1.2.19-vscode)
- Hid the thinking indicator when the thinking content is empty.(v1.2.19-vscode)
- Fixed OpenRouter support for Gemini 3, including suffix stripping, `thought_signature`, and the autocomplete endpoint.(v1.2.19-vscode)
- Fixed listener leaks and redundant file reads occurring during autocomplete operations.(v1.2.19-vscode)
- Preserved tool calls when thinking models return no text content.(v1.2.19-vscode)
- Allowed users to correct an API key after entering an invalid one for xAI/Gemini providers.(v1.2.19-vscode)
- Showed an actionable error when Ollama fails to parse tool calls.(v1.2.19-vscode)
- Ensured the vLLM provider respects the user-configured `contextLength` and model settings.(v1.2.19-vscode)
- Removed the Llama 3.1 405B model from the Groq provider.(v1.0.64-jetbrains)
- Fixed an issue where the Gemini provider merged consecutive same-role messages, causing ordering errors.(v1.0.64-jetbrains)
- Fixed an issue where tool argument values (MCP tool args) were not being coerced to match the required string types in the schema.(v1.0.64-jetbrains)
- Fixed mapping for Moonshot models to use the `reasoning_content` field instead of `content`. (v1.0.64-jetbrains)
- Fixed an issue where the thinking model returned no text content, causing tool calls to be lost.(v1.0.64-jetbrains)
- Stopped CLI free-trial polling for models that are not in a free-trial state.(v1.0.64-jetbrains)
- Removed inline backtick fences from tool instruction prose to clean up output.(v1.0.64-jetbrains)
- Fixed an issue where the thinking indicator was incorrectly shown when there was no thinking content.(v1.0.64-jetbrains)
- Fixed listener leaks and redundant file reads occurring during autocomplete operations.(v1.0.64-jetbrains)
- Allowed users to correct an API key after entering an invalid one for xAI/Gemini providers.(v1.0.64-jetbrains)
- Showed an actionable error message when Ollama fails to parse tool calls.(v1.0.64-jetbrains)
- Fixed an issue where the vLLM provider did not respect the user-configured `contextLength` and model settings.(v1.0.64-jetbrains)
- Removed Llama 3.1 405B from the Groq provider.(v1.3.35-vscode)
- Fixed an issue where Gemini models merged consecutive same-role messages, causing ordering errors.(v1.3.35-vscode)
- Fixed an issue where tool arguments (MCP tool args) were not being coerced to match schema string types.(v1.3.35-vscode)
- Fixed mapping of `reasoning-delta` to `reasoning_content` instead of `content` for certain models.(v1.3.35-vscode)
- Fixed an issue preventing multiple context providers of the same type from being configured in `config.yaml`.(v1.3.35-vscode)
- Stopped CLI free-trial polling for models that are not in a free-trial state.(v1.3.35-vscode)
- Removed inline backtick fences from tool instruction prose.(v1.3.35-vscode)
- Fixed handling of multiple zip files during the JetBrains release artifact creation step.(v1.3.35-vscode)
- Hid the thinking indicator when the thinking content is empty.(v1.3.35-vscode)
- Fixed OpenRouter support for Gemini 3, including suffix stripping, `thought_signature`, and the autocomplete endpoint.(v1.3.35-vscode)
- Fixed listener leaks and redundant file reads occurring during autocomplete operations.(v1.3.35-vscode)
- Preserved tool calls when thinking models return no text content.(v1.3.35-vscode)
- Fixed an issue where users could not correct an API key after entering an invalid one for xAI/Gemini.(v1.3.35-vscode)
- Added actionable error reporting when Ollama fails to parse tool calls.(v1.3.35-vscode)
- Ensured the vLLM provider respects the user-configured `contextLength` and model settings.(v1.3.35-vscode)
- Fixed terminal links opening incorrect URLs when addresses contained ports.(v1.2.18-vscode)
- Removed the hardcoded Unix $ prompt prefix from the terminal UI.(v1.2.18-vscode)
- Fixed loading of rules into the system message context.(v1.2.18-vscode)
- Strengthened the default Apply prompt used for local models.(v1.2.18-vscode)
- Fixed skipping of remote URIs when resolving the Model Context Protocol (MCP) server current working directory (cwd).(v1.2.18-vscode)
- Fixed inclusion of the reasoning_content field for DeepSeek Reasoner models.(v1.2.18-vscode)
- Fixed incorrect location used for the CLI configuration file in the registry client.(v1.2.18-vscode)
- Fixed system-message tools parser when a tool call was non-terminal.(v1.2.18-vscode)
- Fixed Ollama MCP tool calling specifically for Mistral and Gemma3 models.(v1.2.18-vscode)
- Fixed 'No chat model selected' error appearing on startup.(v1.2.18-vscode)
- Preserved indentation when applying code edits to Python files.(v1.2.18-vscode)
- Used the config.yaml name for the default Local Config profile.(v1.2.18-vscode)
- Fixed OpenAI Responses API 400 errors related to reasoning, tool calls, and ID handling.(v1.2.18-vscode)
- Fixed updates to the Gemini model catalog to reflect retired and new models.(v1.2.18-vscode)
- Fixed tool support expansion to reduce 'Invalid tool name' errors.(v1.2.18-vscode)
- Fixed handling of thinking/assistant messages appearing at the end of the chat history.(v1.2.18-vscode)
- Restricted terminal childProcess.spawn to local-only environments for security.(v1.2.18-vscode)
- Fixed terminal links opening incorrect URLs when addresses contained ports.(v1.0.63-jetbrains)
- Removed the hardcoded Unix $ prompt prefix from the terminal UI.(v1.0.63-jetbrains)
- Fixed loading of rules into the system message.(v1.0.63-jetbrains)
- Strengthened the default Apply prompt used for local models.(v1.0.63-jetbrains)
- Fixed skipping of remote URIs when resolving the Model Context Protocol (MCP) server current working directory (cwd).(v1.0.63-jetbrains)
- Fixed inclusion of the reasoning_content field for DeepSeek Reasoner models.(v1.0.63-jetbrains)
- Added a default timeout for terminal command tool execution.(v1.0.63-jetbrains)
- Fixed incorrect location resolution for the CLI configuration file in the registry client.(v1.0.63-jetbrains)
- Fixed system-message tools parser when a tool call was non-terminal.(v1.0.63-jetbrains)
- Fixed Ollama MCP tool calling for Mistral and Gemma3 models.(v1.0.63-jetbrains)
- Fixed OpenAI API 400 errors related to reasoning, tool calls, and ID handling.(v1.0.63-jetbrains)
- Fixed update of the model catalog for retired and new Gemini models.(v1.0.63-jetbrains)
- Fixed expansion of tool support to reduce 'Invalid tool name' errors.(v1.0.63-jetbrains)
- Fixed handling of thinking/assistant messages appearing at the end of chat history.(v1.0.63-jetbrains)
- Guarded against non-string values for the NO_PROXY environment variable.(v1.0.63-jetbrains)
- Fixed terminal links opening incorrect URLs when addresses contained ports.(v1.3.34-vscode)
- Removed the hardcoded Unix $ prompt prefix from the terminal UI.(v1.3.34-vscode)
- Fixed loading of rules into the system message.(v1.3.34-vscode)
- Strengthened the default Apply prompt used for local models.(v1.3.34-vscode)
- Fixed skipping of remote URIs when resolving the Model Context Protocol (MCP) server current working directory (cwd).(v1.3.34-vscode)
- Fixed inclusion of the reasoning_content field for DeepSeek Reasoner models.(v1.3.34-vscode)
- Added a default timeout for terminal command tool execution.(v1.3.34-vscode)
- Fixed incorrect location resolution for the CLI configuration file in the registry client.(v1.3.34-vscode)
- Fixed system-message tools parser when a tool call was non-terminal.(v1.3.34-vscode)
- Fixed Ollama MCP tool calling behavior for Mistral and Gemma3 models.(v1.3.34-vscode)
- Fixed OpenAI Responses API 400 errors related to reasoning, tool calls, and ID handling.(v1.3.34-vscode)
- Fixed update of the model catalog for retired and new Gemini models.(v1.3.34-vscode)
- Fixed expansion of tool support to reduce 'Invalid tool name' errors.(v1.3.34-vscode)
- Fixed handling of thinking/assistant messages appearing at the end of chat history.(v1.3.34-vscode)
- Guarded against non-string values for the NO_PROXY environment variable.(v1.3.34-vscode)
- Restricted terminal childProcess.spawn to local-only environments.(v1.3.34-vscode)
- Preserved indentation when applying code edits to Python files.(v1.3.34-vscode)
- Used the config.yaml name for the default Local Config profile.(v1.3.34-vscode)
- Hardened the system-message tools parser and wired toolOverrides to the system message path.(v1.3.34-vscode)
- Fixed a "No chat model selected" error occurring on startup.(v1.3.34-vscode)
- Fixed Ollama tool calling for specific models.(v1.3.34-vscode)
- Fixed detection of embedded 429 (Too Many Requests) errors from Gemini/VertexAI APIs.(v1.2.17-vscode)
- Fixed calculation of cache hit rate and corrected Vercel stream tool call handling.(v1.2.17-vscode)
- Fixed security vulnerability (CWE-22) by adding a security check to the createNewFile tool.(v1.2.17-vscode)
- Fixed duplicate predefined tools appearing in the dynamic tools system message.(v1.2.17-vscode)
- Fixed configuration not refreshing when creating a new assistant file.(v1.2.17-vscode)
- Fixed handling of spaces in filepaths for JetBrains autocomplete functionality.(v1.2.17-vscode)
- Preserved the leading dot in ls tool path resolution.(v1.2.17-vscode)
- Fixed CLI behavior to show the create new assistant option when logged in.(v1.2.17-vscode)
- Fixed displaying the diff line when instantly applying changes in JetBrains extensions.(v1.2.17-vscode)
- Fixed detection of embedded 429 (Too Many Requests) errors from Gemini/VertexAI APIs.(v1.0.62-jetbrains)
- Fixed calculation of cache hit rate and corrected Vercel stream tool call handling.(v1.0.62-jetbrains)
- Fixed security vulnerability (CWE-22) by adding a security check to the createNewFile tool.(v1.0.62-jetbrains)
- Fixed duplicate predefined tools appearing in the dynamic tools system message.(v1.0.62-jetbrains)
- Fixed configuration not refreshing when creating a new assistant file.(v1.0.62-jetbrains)
- Fixed handling of spaces in filepaths for JetBrains autocomplete functionality.(v1.0.62-jetbrains)
- Fixed resolution of file paths starting with a dot in the ls tool.(v1.0.62-jetbrains)
- Fixed displaying the diff line when instantly applying changes in JetBrains extensions.(v1.0.62-jetbrains)
- Fixed showing the 'create new assistant' option in the CLI when the user is logged in.(v1.0.62-jetbrains)
- Resolved high severity minimatch vulnerabilities through dependency upgrades.(v1.0.62-jetbrains)
- Fixed npm OIDC publishing issues by restoring registry-url, clearing tokens, and updating Node versions.(v1.0.62-jetbrains)
- Fixed detection of embedded 429 (Too Many Requests) errors from Gemini/VertexAI APIs.(v1.3.33-vscode)
- Fixed calculation of cache hit rate and corrected Vercel stream tool call handling.(v1.3.33-vscode)
- Fixed security vulnerability (CWE-22) by adding a security check to the createNewFile tool.(v1.3.33-vscode)
- Fixed duplicate predefined tools appearing in the dynamic tools system message.(v1.3.33-vscode)
- Fixed configuration not refreshing when creating a new assistant file.(v1.3.33-vscode)
- Fixed handling of spaces in filepaths for JetBrains autocomplete functionality.(v1.3.33-vscode)
- Fixed resolution of file paths starting with a dot in the ls tool.(v1.3.33-vscode)
- Fixed displaying the diff line when instantly applying changes in JetBrains extensions.(v1.3.33-vscode)
- Fixed showing the create new assistant option in the CLI when the user is logged in.(v1.3.33-vscode)
- Resolved high severity minimatch vulnerabilities through dependency upgrades.(v1.3.33-vscode)
- Fixed npm OIDC publishing issues by restoring registry-url, clearing tokens, and using Node 22/24.(v1.3.33-vscode)
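The NO_PROXY guard mentioned above (v1.0.63-jetbrains and v1.3.34-vscode) amounts to defensive type checking before parsing. A minimal sketch, assuming the variable is a comma-separated host list; the function name and parsing details are illustrative, not Continue's actual implementation:

```typescript
// Hypothetical sketch of guarding against a non-string NO_PROXY value:
// coerce anything that is not a string to an empty host list before splitting.
function parseNoProxy(raw: unknown): string[] {
  if (typeof raw !== "string") {
    return []; // guard: NO_PROXY may arrive as undefined or a non-string
  }
  return raw
    .split(",")
    .map((host) => host.trim())
    .filter((host) => host.length > 0);
}

// Usage: parseNoProxy(process.env.NO_PROXY) is safe even when the
// variable is unset, because undefined falls through the type guard.
```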
Improvements in March 2026
- Updated numerous internal dependencies to newer versions for improved stability and security.(v1.2.22-vscode)
- Updated numerous internal dependencies to newer versions for improved stability and security.(v1.3.38-vscode)
- Updated dependencies for JetBrains and VS Code integrations (versions 65 and 36 respectively).(v1.2.20-vscode)
- Updated expected files for JetBrains binary tests.(v1.2.20-vscode)
- Updated to VS Code extension version 1.3.36.(v1.0.65-jetbrains)
- Updated to JetBrains extension version 65.(v1.0.65-jetbrains)
- Hardened JetBrains remote configuration synchronization to prevent IDE freezes.(v1.3.36-vscode)
- Fixed mapping of max_tokens to max_completion_tokens specifically for the DeepSeek reasoner in the OpenAI adapter.(v1.3.36-vscode)
- Ensured correct pass-through of message types between core and JetBrains components.(v1.3.36-vscode)
- Added support for the `reasoning_content` field for Kimi models in the Moonshot provider.(v1.2.19-vscode)
- Ensured the `contextLength` specified in YAML model configuration is respected.(v1.2.19-vscode)
- Lazy-loaded the Ollama /api/show endpoint to reduce unnecessary initial requests.(v1.2.19-vscode)
- Ensured installation steps are not skipped by default and the lock file is synchronized.(v1.2.19-vscode)
- Added documentation clarifying where secrets can be templated from.(v1.2.19-vscode)
- Added troubleshooting documentation specifically for Ollama memory errors.(v1.2.19-vscode)
- Added `keepAlive` configuration support to YAML completion options schema.(v1.0.64-jetbrains)
- Ensured the model name is included in the completion request body for llama.cpp.(v1.0.64-jetbrains)
- Ensured the `contextLength` specified in the YAML model configuration is respected across providers.(v1.0.64-jetbrains)
- Lazy-loaded the Ollama /api/show endpoint to reduce unnecessary initial requests.(v1.0.64-jetbrains)
- Ensured installation steps are not skipped by default and the lock file is synchronized.(v1.0.64-jetbrains)
- Allowed multiple context providers of the same type to be configured in `config.yaml`.(v1.0.64-jetbrains)
- Removed Llama 3.1 405B from the Groq provider.(v1.0.64-jetbrains)
- Fixed an issue where inline backtick fences were present in tool instruction prose.(v1.0.64-jetbrains)
- Handled multiple zip files correctly during the JetBrains release artifact creation step.(v1.0.64-jetbrains)
- Added `keepAlive` configuration support to YAML completion options schema.(v1.3.35-vscode)
- Included the model name in the completion request body for `llama.cpp` providers.(v1.3.35-vscode)
- Added support for the `reasoning_content` field for Kimi models in the Moonshot provider.(v1.3.35-vscode)
- Ensured the context length specified in the YAML model configuration is respected.(v1.3.35-vscode)
- Lazy-loaded the Ollama `/api/show` endpoint to reduce unnecessary initial requests.(v1.3.35-vscode)
- Ensured installation steps are not skipped by default and the lock file is synchronized.(v1.3.35-vscode)
- Added documentation clarifying where secrets can be templated from.(v1.3.35-vscode)
- Added troubleshooting documentation specifically for Ollama memory errors.(v1.3.35-vscode)
- Hardened system message tools and wired toolOverrides to the system message path.(v1.2.18-vscode)
- Improved SSL certificate troubleshooting guidance documentation.(v1.2.18-vscode)
- Documented all CLI slash commands available in TUI mode.(v1.2.18-vscode)
- Updated default LLM configurations: removed Gemini 2.0 Flash and updated Claude defaults to 4.6.(v1.2.18-vscode)
- Improved error handling UX and moved stream retry logic to the BaseLLM class.(v1.2.18-vscode)
- Updated Node.js LTS version to v20.20.1.(v1.2.18-vscode)
- Fixed stale Models documentation links displayed in the configuration panel.(v1.2.18-vscode)
- Ensured Azure hosted Anthropic service sends x-api-key instead of api-key as a header.(v1.2.18-vscode)
- Added a default timeout for terminal command tool execution.(v1.2.18-vscode)
- Hardened system message tools and wired toolOverrides to the system message path.(v1.0.63-jetbrains)
- Improved SSL certificate troubleshooting guidance documentation.(v1.0.63-jetbrains)
- Documented all CLI slash commands available in TUI mode.(v1.0.63-jetbrains)
- Updated default LLM configurations: removed Gemini 2.0 Flash and updated Claude defaults to 4.6.(v1.0.63-jetbrains)
- Sent x-api-key instead of api-key as the header for Azure hosted Anthropic service.(v1.0.63-jetbrains)
- Updated JetBrains and VS Code versions (63 and 34 respectively).(v1.0.63-jetbrains)
- Improved error handling UX and moved stream retry logic to BaseLLM.(v1.0.63-jetbrains)
- Preserved indentation when applying code edits to Python files.(v1.0.63-jetbrains)
- Used the config.yaml name for the default Local Config profile.(v1.0.63-jetbrains)
- Restricted terminal childProcess.spawn to local-only environments.(v1.0.63-jetbrains)
- Improved SSL certificate troubleshooting guidance documentation.(v1.3.34-vscode)
- Improved documentation coverage for AskQuestion tool behavior and TUI interaction.(v1.3.34-vscode)
- Improved documentation for all CLI slash commands when in TUI mode.(v1.3.34-vscode)
- Improved error handling user experience (UX) and moved stream retry logic to BaseLLM.(v1.3.34-vscode)
- Updated default LLM configurations: removed Gemini 2.0 Flash and updated Claude defaults to 4.6.(v1.3.34-vscode)
- Updated JetBrains and VS Code compatibility versions.(v1.3.34-vscode)
- Updated Node.js LTS to v20.20.1.(v1.3.34-vscode)
- Fixed stale Models documentation links in the configuration panel.(v1.3.34-vscode)
- Upgraded AI SDK to v6.(v1.2.17-vscode)
- Added PostHog metrics tracking and live API tests.(v1.2.17-vscode)
- Added conversation message caching to AnthropicApi.(v1.2.17-vscode)
- Added cache token data to PostHog telemetry events.(v1.2.17-vscode)
- Removed directory structure information from the system prompt.(v1.2.17-vscode)
- Resolved high severity minimatch vulnerabilities through dependency upgrades.(v1.2.17-vscode)
- Upgraded dependencies including tar, express-rate-limit, and various minimatch versions across packages.(v1.2.17-vscode)
- Upgraded security-related dependencies like @electron/rebuild and @openapitools/openapi-generator-cli.(v1.2.17-vscode)
- Simplified the quickstart guide documentation to point to continue.dev/check.(v1.0.62-jetbrains)
- Added PostHog metrics and live API tests alongside prompt caching.(v1.0.62-jetbrains)
- Added conversation message caching specifically for AnthropicApi.(v1.0.62-jetbrains)
- Added cache token data to PostHog telemetry events.(v1.0.62-jetbrains)
- Removed directory structure information from the system prompt to potentially improve context usage.(v1.0.62-jetbrains)
- Documented the check CLI for running checks locally.(v1.0.62-jetbrains)
- Updated documentation URLs for extensions readmes.(v1.0.62-jetbrains)
- Simplified the quickstart guide documentation to point to continue.dev/check.(v1.3.33-vscode)
- Added PostHog metrics and live API tests alongside prompt caching.(v1.3.33-vscode)
- Added conversation message caching specifically for AnthropicApi.(v1.3.33-vscode)
- Added cache token data to PostHog telemetry events.(v1.3.33-vscode)
- Removed directory structure information from the system prompt.(v1.3.33-vscode)
- Removed the beta release workflow.(v1.3.33-vscode)
- Updated documentation URLs for extensions.(v1.3.33-vscode)
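Two of the YAML-related improvements above (`keepAlive` support in completion options and `contextLength` being respected) would surface in config.yaml roughly as follows. This is a sketch under assumptions: the `defaultCompletionOptions` nesting and the model entry are illustrative, not taken from these notes; check the Continue YAML reference for exact field names.

```yaml
# Hypothetical config.yaml model entry (nesting is an assumption;
# `keepAlive` and `contextLength` come from the improvement notes above)
models:
  - name: local-qwen
    provider: ollama
    model: qwen2.5-coder:7b
    defaultCompletionOptions:
      contextLength: 8192 # now respected per the YAML config improvement
      keepAlive: 300      # newly supported in completion options
```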
All Releases in March 2026
v1.2.22-vscode2 features1 fixThis release introduces new ways to manage your history and configurations, allowing you to filter session history by workspace directory. A critical bug was also fixed ensuring the application handles missing or empty config.yaml files gracefully. Additionally, several internal dependencies have been updated.
v1.3.38-vscode2 features1 fixThis release introduces new ways to manage your history and configurations, allowing you to filter session history by workspace directory. A critical bug was also fixed ensuring the application handles missing or empty config.yaml files gracefully. Several internal dependency updates were also performed.
v1.0.67-jetbrains2 features2 fixesThis release introduces new ways to manage your history and configurations, allowing you to filter session history by workspace and utilize a dedicated .continue/configs directory. Several critical bugs were also fixed, including preventing IPC stream corruption and ensuring configuration files are correctly initialized.
v1.2.21-vscode1 fixThis release focuses on removing a restriction related to Ollama tool support, specifically removing the gate that previously limited template-based tool usage with Ollama. No new user-facing features were introduced in this update.
v1.3.37-vscode1 fixThis release focuses on maintenance and a specific fix related to tool usage. The primary change is the removal of a restriction that previously gated template-based tool support when integrating with Ollama.
v1.0.66-jetbrains1 fixThis release focuses on removing a restriction related to tool support when using Ollama, ensuring template-based tools function correctly. No new user-facing features were introduced in this update.
v1.2.20-vscode (11 fixes): This release focuses heavily on stability and security, resolving critical vulnerabilities and numerous crashes across different environments. Key fixes include preventing IDE freezes in JetBrains caused by configuration sync and large messages, alongside resolving exceptions in diff handling and intention previews.
v1.0.65-jetbrains (9 fixes): This release focuses heavily on stability and performance within the JetBrains IDEs, addressing several critical bugs that caused freezes and crashes. Key fixes include preventing sidebar freezes from large messages and resolving exceptions during intention previews. Users should experience a more robust and reliable experience, especially when working with remote configurations.
v1.3.36-vscode (1 feature, 8 fixes): This release introduces ClawRouter as a new option for cost-optimized model routing, providing users with more flexibility in managing model costs. Numerous stability fixes were implemented, including resolving crashes in diff handling and intention previews, and preventing IDE freezes caused by large messages in JetBrains environments.
v1.2.19-vscode (2 features, 15 fixes): This release introduces a new option to opt out of the Responses API and enhances provider compatibility by adding identification headers for OpenRouter. Numerous bug fixes address issues with model configuration, message ordering in Gemini, tool call handling, and ensuring provider settings like context length are correctly respected across various integrations.
v1.0.64-jetbrains (3 features, 12 fixes): This release focuses heavily on provider stability and configuration accuracy, including respecting context length settings for vLLM and fixing message ordering issues for Gemini. New features include an option to opt out of the Responses API and enhanced header identification for OpenRouter providers.
v1.3.35-vscode (2 features, 15 fixes): This release introduces a new option to opt out of the Responses API and enhances provider compatibility by adding necessary headers for OpenRouter identification. Numerous bug fixes address issues with model configuration, message ordering in Gemini, tool call handling, and resource management across various providers like Ollama and vLLM.
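The useResponsesApi opt-out added in these releases is a per-model toggle. A minimal sketch of how such an option might appear in config.yaml, assuming it sits alongside the other model fields (the exact placement and default behavior are assumptions, not confirmed by the release notes):

```yaml
models:
  - name: GPT-5             # illustrative model entry
    provider: openai
    model: gpt-5
    useResponsesApi: false  # opt out of the Responses API for this model
```

Setting the flag to false would keep this model on the provider's non-Responses endpoint; omitting it leaves the default behavior unchanged.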
v1.2.18-vscode (8 features, 17 fixes): This release introduces significant new capabilities, including support for Tensorix and MiniMax as new LLM providers, and enhanced CLI functionality for invoking skills and managing sessions. Numerous bug fixes address issues with terminal link resolution, tool calling across various models (Ollama, OpenAI), and improved configuration loading. Users will also benefit from updated default LLM settings and enhanced error handling across the platform.
v1.0.63-jetbrains (7 features, 15 fixes): This release introduces significant expansion in LLM provider support, adding Tensorix and enabling Gemini via the AI SDK, alongside Bedrock API key authentication. Key fixes address issues with terminal link resolution, system message parsing, and improved stability for various models like DeepSeek and Ollama. Users will also benefit from new CLI capabilities, including session import/export and invokable skills.
v1.3.34-vscode (7 features, 21 fixes): This release introduces significant expansion in LLM provider support by adding Tensorix and MiniMax, alongside support for Bedrock API key authentication. Key fixes address issues with terminal link resolution, improved tool calling reliability across various models (including Ollama and OpenAI), and enhanced stability for local configurations. Users will also benefit from new CLI capabilities like session import/export and invokable skills.
v1.2.17-vscode (12 features, 9 fixes): This release focuses heavily on model integration and stability, introducing support for Claude Sonnet/Opus 4.6, z.ai, and enabling ai-sdk for XAI and Deepseek. Key improvements include turn-level prompt caching, background execution for bash tools, and numerous security dependency upgrades to resolve vulnerabilities.
v1.0.62-jetbrains (12 features, 11 fixes): This release focuses heavily on model integration and stability, introducing support for Claude Sonnet/Opus 4.6, z.ai, and enabling ai-sdk for XAI and Deepseek. Key new capabilities include background execution for bash tools and a new hooks system for CLI event interception. Numerous fixes address API error handling, security vulnerabilities (including path traversal), and improve the reliability of publishing workflows.
v1.3.33-vscode (14 features, 11 fixes): This release focuses heavily on model integration and performance, introducing support for Claude Sonnet/Opus 4.6, z.ai, and enabling ai-sdk integration for providers like xai and deepseek. Key enhancements include turn-level prompt caching and background execution for bash tools. Several security fixes were also implemented, including resolving minimatch vulnerabilities and adding checks to the file creation tool.