Continue v1.2.19-vscode

2 features · 15 fixes · 6 improvements
Tags: autocomplete, context, jetbrains, models, vscode

Summary

This release introduces a new option to opt out of the Responses API and improves provider compatibility by adding identification headers for OpenRouter. Numerous bug fixes address model configuration, message ordering in Gemini, tool-call handling, and provider settings (such as context length) not being respected across various integrations.

New Features

  • Added `useResponsesApi` option to allow users to opt out of the Responses API.
  • Enabled OpenRouter to send HTTP-Referer and X-Title headers to identify the application when making requests.
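For orientation, a hypothetical `config.yaml` fragment showing the new opt-out. The field name comes from this release; its placement under a model entry is an assumption, so check the Continue configuration reference for the exact schema:

```yaml
models:
  - name: GPT-4.1            # illustrative model entry
    provider: openai
    model: gpt-4.1
    # New option from this release; placement here is an assumption
    useResponsesApi: false
```

With `useResponsesApi: false`, requests would presumably fall back to the Chat Completions API rather than the Responses API.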

Bug Fixes

  • Removed Llama 3.1 405B from the Groq provider.
  • Fixed an issue where Gemini merged consecutive same-role messages, causing ordering errors.
  • Fixed an issue where tool arguments (MCP tool args) were not being coerced to match schema string types.
  • Fixed mapping of `reasoning-delta` to `reasoning_content` instead of `content` for certain models.
  • Fixed an issue preventing multiple context providers of the same type from being configured in `config.yaml`.
  • Stopped CLI free-trial polling for models that are not in a free-trial state.
  • Removed inline backtick fences from tool instruction prose.
  • Fixed handling of multiple zip files during the JetBrains release artifact creation step.
  • Hid the thinking indicator when the thinking content is empty.
  • Fixed OpenRouter support for Gemini 3, including suffix stripping, `thought_signature`, and the autocomplete endpoint.
  • Fixed listener leaks and redundant file reads occurring during autocomplete operations.
  • Preserved tool calls when thinking models return no text content.
  • Allowed users to correct an API key after entering an invalid one for the xAI and Gemini providers.
  • Added an actionable error message when Ollama fails to parse tool calls.
  • Ensured the vLLM provider respects the user-configured `contextLength` and model settings.
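The MCP argument-coercion fix above can be illustrated with a minimal sketch (hypothetical names, not Continue's actual implementation): walk the tool's JSON-schema properties and stringify scalar arguments that the schema declares as strings.

```typescript
// Hypothetical sketch of schema-driven argument coercion.
type JSONSchema = { properties?: Record<string, { type?: string }> };

function coerceArgsToSchema(
  args: Record<string, unknown>,
  schema: JSONSchema,
): Record<string, unknown> {
  const out: Record<string, unknown> = { ...args };
  for (const [key, prop] of Object.entries(schema.properties ?? {})) {
    const value = out[key];
    // Only stringify scalars that the schema declares as strings;
    // leave objects, arrays, and already-correct values untouched.
    if (
      prop.type === "string" &&
      (typeof value === "number" || typeof value === "boolean")
    ) {
      out[key] = String(value);
    }
  }
  return out;
}
```

For example, a model that emits `{ line: 42 }` for a tool whose `line` parameter is typed `string` would have the value rewritten to `"42"` before the tool call is dispatched.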

Improvements

  • Added support for the `reasoning_content` field for Kimi models in the Moonshot provider.
  • Ensured the `contextLength` specified in YAML model configuration is respected.
  • Lazy-loaded the Ollama /api/show endpoint to reduce unnecessary initial requests.
  • Ensured installation steps are not skipped by default and the lock file is synchronized.
  • Added documentation clarifying where secrets can be templated from.
  • Added troubleshooting documentation specifically for Ollama memory errors.
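The `contextLength` items above concern YAML model configuration. A hedged sketch of where the value lives (nesting under `defaultCompletionOptions` is an assumption based on the Continue config format; values are illustrative):

```yaml
models:
  - name: Local vLLM
    provider: vllm
    model: my-model                    # illustrative model id
    apiBase: http://localhost:8000/v1  # illustrative endpoint
    defaultCompletionOptions:
      # Now respected instead of being overridden by provider defaults
      contextLength: 32768
```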
