Change8

v1.2.18-vscode

Continue
8 features · 17 fixes · 9 improvements
Tags: autocomplete, chat, jetbrains, models, vscode

Summary

This release introduces significant new capabilities, including support for Tensorix and MiniMax as new LLM providers and enhanced CLI functionality for invoking skills and managing sessions. Numerous bug fixes address terminal link resolution, tool calling across various models (Ollama, OpenAI), and configuration loading. Users will also benefit from updated default LLM settings and improved error handling across the platform.

New Features

  • Added Tensorix as a new supported LLM provider.
  • Introduced MiniMax as a new LLM provider, defaulting to M2.7.
  • Enabled support for Gemini models through the AI SDK.
  • Added Bedrock API key authentication support.
  • Enabled the command-line interface (CLI) to discover and use .continue/checks/ in code review contexts.
  • Enabled CLI users to invoke skills directly and import any skill.
  • Enabled CLI users to export and import chat sessions.
  • Added Qwen multi-file FIM template for improved repository-level autocompletion.
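The multi-file FIM feature above can be illustrated with a sketch of a repository-level fill-in-the-middle prompt in the style of Qwen2.5-Coder's documented special tokens (`<|repo_name|>`, `<|file_sep|>`, `<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`). The token names are Qwen's; the `buildMultiFileFimPrompt` helper and its inputs are hypothetical and not Continue's actual API:

```typescript
// Hypothetical sketch of a repo-level FIM prompt, assuming Qwen2.5-Coder's
// documented special tokens. Not Continue's implementation.
interface FimFile {
  path: string;
  content: string;
}

function buildMultiFileFimPrompt(
  repoName: string,
  otherFiles: FimFile[], // sibling files supplied as repository context
  prefix: string,        // code before the cursor in the current file
  suffix: string         // code after the cursor in the current file
): string {
  // Each context file is introduced by a file separator token and its path.
  const context = otherFiles
    .map((f) => `<|file_sep|>${f.path}\n${f.content}`)
    .join("\n");
  // The model generates the text that belongs between the prefix and suffix,
  // emitted at the <|fim_middle|> position.
  return (
    `<|repo_name|>${repoName}\n` +
    `${context}\n` +
    `<|fim_prefix|>${prefix}<|fim_suffix|>${suffix}<|fim_middle|>`
  );
}
```

Supplying sibling files this way is what lets the model complete code that references symbols defined elsewhere in the repository.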

Bug Fixes

  • Fixed terminal links opening incorrect URLs when addresses contained ports.
  • Removed the hardcoded Unix $ prompt prefix from the terminal UI.
  • Fixed rules not being loaded into the system message context.
  • Strengthened the default Apply prompt used for local models.
  • Fixed remote URIs not being skipped when resolving the Model Context Protocol (MCP) server current working directory (cwd).
  • Fixed the reasoning_content field not being included for DeepSeek Reasoner models.
  • Fixed incorrect location used for the CLI configuration file in the registry client.
  • Fixed system-message tools parser when a tool call was non-terminal.
  • Fixed Ollama MCP tool calling specifically for Mistral and Gemma3 models.
  • Fixed 'No chat model selected' error appearing on startup.
  • Fixed indentation not being preserved when applying code edits to Python files.
  • Fixed the default Local Config profile not using the config.yaml name.
  • Fixed OpenAI Responses API 400 errors related to reasoning, tool calls, and ID handling.
  • Updated the Gemini model catalog to reflect retired and new models.
  • Expanded tool support to reduce 'Invalid tool name' errors.
  • Fixed handling of thinking/assistant messages appearing at the end of the chat history.
  • Restricted terminal childProcess.spawn to local-only environments for security.
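The terminal-link fix above concerns a common class of bug: link extraction that treats `:` as a host terminator drops the port, while a real URL parser keeps it. This hedged sketch is not Continue's code, just an illustration using the standard WHATWG `URL` class:

```typescript
// A naive regex that stops at the first ":" after the scheme loses the port.
// Not Continue's implementation; an illustration of the pitfall.
function naiveExtractHost(link: string): string {
  const m = /^https?:\/\/([^/:]+)/.exec(link);
  return m ? m[1] : "";
}

// URL.origin preserves scheme, host, and any non-default port, so a link to
// http://localhost:3000/app resolves to the right server.
function extractOrigin(link: string): string {
  return new URL(link).origin;
}
```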

Improvements

  • Hardened system message tools and wired toolOverrides to the system message path.
  • Improved SSL certificate troubleshooting guidance documentation.
  • Documented all CLI slash commands available in TUI mode.
  • Updated default LLM configurations: removed Gemini 2.0 Flash and updated Claude defaults to 4.6.
  • Improved error handling UX and moved stream retry logic to the BaseLLM class.
  • Updated Node.js LTS version to v20.20.1.
  • Fixed stale Models documentation links displayed in the configuration panel.
  • Ensured the Azure-hosted Anthropic service sends x-api-key instead of api-key as the header.
  • Added a default timeout for terminal command tool execution.
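A default timeout for command execution, as in the last item above, is typically built by racing the work against a timer. This is a minimal sketch under assumed names (`withTimeout`, `DEFAULT_TIMEOUT_MS` are illustrative, not Continue's API):

```typescript
// Hypothetical sketch: wrap a promise-returning command so it rejects after
// a default timeout. Names are illustrative, not Continue's implementation.
const DEFAULT_TIMEOUT_MS = 30_000;

function withTimeout<T>(
  work: Promise<T>,
  ms: number = DEFAULT_TIMEOUT_MS
): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Command timed out after ${ms}ms`)),
      ms
    );
  });
  // Whichever settles first wins; clear the timer either way so the
  // process can exit promptly.
  return Promise.race([work, timeout]).finally(() => clearTimeout(timer!));
}
```

A caller would wrap its spawned command's completion promise, e.g. `withTimeout(runCommand("npm test"))`, where `runCommand` is any promise-returning runner.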
