Change8

v1.3.34-vscode

Continue
7 features · 21 fixes · 8 improvements
Tags: autocomplete, chat, jetbrains, models, vscode

Summary

This release expands LLM provider support with two new providers, Tensorix and MiniMax, and adds Bedrock API key authentication. Key fixes address terminal link resolution, tool-calling reliability across models (including Ollama and OpenAI), and stability for local configurations. The CLI also gains session import/export and invokable skills.

New Features

  • Added Tensorix as a new LLM provider.
  • Enabled discovery of .continue/checks/ in cn review mode via the CLI.
  • Introduced invokable skills and the ability to import any skill via the CLI.
  • Enabled export and import functionality for chat sessions via the CLI.
  • Added support for Qwen multi-file FIM template for repository-level autocompletion.
  • Enabled support for Gemini models through the AI SDK.
  • Added support for Bedrock API key authentication.
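
The provider additions above would typically surface in config.yaml. The sketch below is hypothetical: the provider identifier tensorix, the model IDs, and the exact Bedrock API key field are assumptions patterned on Continue's usual models schema, not confirmed values — check the Continue documentation for the real field names.

```yaml
# Sketch only -- provider ids, model ids, and field names are assumptions.
models:
  - name: Tensorix Chat
    provider: tensorix            # assumed provider id for the new provider
    model: your-model-id          # placeholder
    apiKey: ${TENSORIX_API_KEY}
  - name: Claude on Bedrock
    provider: bedrock
    model: your-bedrock-model-id  # placeholder
    apiKey: ${BEDROCK_API_KEY}    # new: API key auth instead of AWS credential chains
```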

Bug Fixes

  • Fixed terminal links opening incorrect URLs when addresses contained ports.
  • Removed the hardcoded Unix $ prompt prefix from the terminal UI.
  • Fixed rules not being loaded into the system message.
  • Strengthened the default Apply prompt used for local models.
  • Fixed skipping of remote URIs when resolving the Model Context Protocol (MCP) server current working directory (cwd).
  • Fixed the reasoning_content field not being included for DeepSeek Reasoner models.
  • Added a default timeout for terminal command tool execution.
  • Fixed incorrect location resolution for the CLI configuration file in the registry client.
  • Fixed system-message tools parser when a tool call was non-terminal.
  • Fixed Ollama MCP tool calling behavior for Mistral and Gemma3 models.
  • Fixed OpenAI Responses API 400 errors related to reasoning, tool calls, and ID handling.
  • Updated the model catalog to drop retired Gemini models and add new ones.
  • Expanded tool support to reduce 'Invalid tool name' errors.
  • Fixed handling of thinking/assistant messages appearing at the end of chat history.
  • Guarded against non-string values for the NO_PROXY environment variable.
  • Restricted terminal childProcess.spawn to local-only environments.
  • Fixed indentation being lost when applying code edits to Python files.
  • Used the name from config.yaml for the default Local Config profile.
  • Hardened the system-message tools parser and wired toolOverrides into the system-message path.
  • Fixed a "No chat model selected" error occurring on startup.
  • Fixed Ollama tool calling for specific models.
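
As an illustration of the NO_PROXY guard above, here is a minimal sketch of the general pattern; the helper name getNoProxyList is hypothetical and not Continue's actual code.

```typescript
// Hypothetical helper illustrating the NO_PROXY guard; not Continue's actual code.
// Some environments expose NO_PROXY as undefined or even a non-string value,
// so validate the type before splitting the comma-separated host list.
function getNoProxyList(env: Record<string, unknown>): string[] {
  const raw = env.NO_PROXY ?? env.no_proxy;
  if (typeof raw !== "string" || raw.trim() === "") {
    return []; // non-string or empty: treat as "no proxy exclusions"
  }
  return raw
    .split(",")
    .map((entry) => entry.trim())
    .filter((entry) => entry.length > 0);
}
```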

Improvements

  • Improved SSL certificate troubleshooting guidance documentation.
  • Improved documentation coverage for AskQuestion tool behavior and TUI interaction.
  • Improved documentation for all CLI slash commands when in TUI mode.
  • Improved error handling user experience (UX) and moved stream retry logic to BaseLLM.
  • Updated default LLM configurations: removed Gemini 2.0 Flash and updated Claude defaults to 4.6.
  • Updated JetBrains and VS Code compatibility versions.
  • Updated Node.js LTS to v20.20.1.
  • Fixed stale Models documentation links in the configuration panel.
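
The stream-retry change can be pictured with a generic retry-with-backoff wrapper like the sketch below; the attempt count and delays are illustrative assumptions, and Continue's actual BaseLLM implementation may differ.

```typescript
// Generic retry-with-backoff sketch; attempt count and delays are assumptions,
// not Continue's actual BaseLLM settings.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Exponential backoff between attempts: base, 2x base, 4x base, ...
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt),
        );
      }
    }
  }
  throw lastError;
}
```

Centralizing this in a base class means every provider's stream path gets the same recovery behavior instead of each duplicating its own.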
