v1.0.64-jetbrains
Summary
This release focuses heavily on provider stability and configuration accuracy, including respecting context length settings for vLLM and fixing message ordering issues for Gemini. New features include an option to opt out of the Responses API and enhanced header identification for OpenRouter providers.
New Features
- Added the `useResponsesApi` option to allow users to opt out of the Responses API.
- Enabled OpenRouter to send HTTP-Referer and X-Title headers to help identify the application.
- Added OpenRouter provider support for Gemini 3: tools, suffix stripping, thought signatures, and the autocomplete endpoint.
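The notes don't show where `useResponsesApi` lives; assuming it sits alongside other model-level options in `config.yaml` (the surrounding field names are illustrative, only the option name comes from the release notes), opting out might look like:

```yaml
# Hypothetical config.yaml sketch — model fields are assumptions,
# only `useResponsesApi` is named in the release notes.
models:
  - name: My OpenAI Model
    provider: openai
    model: gpt-4o
    useResponsesApi: false  # opt out of the Responses API
```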
Bug Fixes
- Removed the Llama 3.1 405B model from the Groq provider.
- Fixed an issue where the Gemini provider merged consecutive same-role messages, causing ordering errors.
- Fixed an issue where MCP tool argument values were not coerced to the string types required by the schema.
- Fixed mapping for Moonshot models to use the `reasoning_content` field instead of `content`.
- Fixed an issue where tool calls were lost when a thinking model returned no text content.
- Fixed an issue where the CLI would stop polling the free-trial status for models that are not in a free-trial period.
- Removed inline backtick fences from tool instruction prose to clean up output.
- Fixed an issue where the thinking indicator was incorrectly shown when there was no thinking content.
- Fixed listener leaks and redundant file reads occurring during autocomplete operations.
- Allowed users to correct an API key after entering an invalid one for xAI/Gemini providers.
- Showed an actionable error message when Ollama fails to parse tool calls.
- Fixed an issue where the vLLM provider did not respect the user-configured `contextLength` and model settings.
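The vLLM fix concerns the `contextLength` setting in the YAML model configuration. A sketch of the setting that is now respected — the nesting under `defaultCompletionOptions` is an assumption based on typical Continue configs, and the model name is a placeholder:

```yaml
# Sketch, not authoritative — only `contextLength` is named in the notes.
models:
  - name: Local vLLM
    provider: vllm
    model: my-org/my-model  # placeholder
    defaultCompletionOptions:
      contextLength: 16384  # now honored instead of a provider default
```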
Improvements
- Added `keepAlive` configuration support to YAML completion options schema.
- Ensured the model name is included in the completion request body for llama.cpp.
- Ensured the `contextLength` specified in the YAML model configuration is respected across providers.
- Lazy-loaded the Ollama /api/show endpoint to reduce unnecessary initial requests.
- Ensured installation steps are not skipped by default and the lock file is synchronized.
- Allowed multiple context providers of the same type to be configured in `config.yaml`.
- Handled multiple zip files correctly during the JetBrains release artifact creation step.
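Two of the YAML-facing improvements above — `keepAlive` in the completion options schema and multiple context providers of the same type — could be combined in a config like the following sketch. Field names other than `keepAlive` and the `context` list, and all URLs, are assumptions for illustration:

```yaml
# Hypothetical config.yaml sketch combining the two YAML improvements.
models:
  - name: Ollama Model
    provider: ollama
    model: llama3.1  # placeholder
    defaultCompletionOptions:
      keepAlive: 1800  # now accepted by the YAML completion options schema

context:
  # Multiple providers of the same type are now allowed.
  - provider: http
    params:
      url: https://internal.example.com/context-a  # placeholder
  - provider: http
    params:
      url: https://internal.example.com/context-b  # placeholder
```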
Related Documentation
- https://github.com/continuedev/continue/pull/11804
- https://github.com/continuedev/continue/pull/11801
- https://github.com/continuedev/continue/pull/11803
- https://github.com/continuedev/continue/pull/11806
- https://github.com/continuedev/continue/pull/11807
- https://github.com/continuedev/continue/pull/11805
- https://github.com/continuedev/continue/pull/11834
- https://github.com/continuedev/continue/pull/11836
- https://github.com/continuedev/continue/pull/11837
- https://github.com/continuedev/continue/pull/11838
- https://github.com/continuedev/continue/pull/11839
- https://github.com/continuedev/continue/pull/11840
- https://github.com/continuedev/continue/pull/11842
- https://github.com/continuedev/continue/pull/11809
- https://github.com/continuedev/continue/pull/11849
- https://github.com/continuedev/continue/pull/11850
- https://github.com/continuedev/continue/pull/11846
- https://github.com/continuedev/continue/pull/11852
- https://github.com/continuedev/continue/pull/11847
- https://github.com/continuedev/continue/pull/11844
- https://github.com/continuedev/continue/pull/11845
- https://github.com/continuedev/continue/pull/11853
- https://github.com/continuedev/continue/pull/11855
- https://github.com/continuedev/continue/pull/11859
- https://github.com/continuedev/continue/pull/11857
- https://github.com/continuedev/continue/pull/11860
- https://github.com/continuedev/continue/pull/11862
- https://github.com/continuedev/continue/pull/11215
- https://github.com/continuedev/continue/pull/11869
- https://github.com/continuedev/continue/pull/11873
- https://github.com/continuedev/continue/pull/11874
- https://github.com/continuedev/continue/pull/11856
- https://github.com/continuedev/continue/pull/11863
- https://github.com/continuedev/continue/pull/11864
- https://github.com/continuedev/continue/pull/11865
- https://github.com/continuedev/continue/pull/11866
- https://github.com/continuedev/continue/pull/11854
- https://github.com/continuedev/continue/pull/11870
- https://github.com/continuedev/continue/pull/11868
- https://github.com/continuedev/continue/compare/v1.0.63-jetbrains...v1.0.64-jetbrains