Continue chat
Updates related to the chat component of Continue.
All CHAT Features
- Introduced support for subagents.(v1.2.15-vscode)
- Enabled agent skills functionality.(v1.2.15-vscode)
- Added a button to remove rules.(v1.2.15-vscode)
- Introduced resubmission logic for overloaded errors.(v1.2.15-vscode)
- Enabled CLI support for the Ask Sage provider.(v1.2.15-vscode)
- Enabled CLI to detect WSL and spawn the appropriate shell.(v1.2.15-vscode)
- Indicated excluded tools in the Model Context Protocol (MCP) server using a tools badge.(v1.0.60-jetbrains)
- Introduced support for OpenRouter Provider with dynamic model loading.(v1.0.60-jetbrains)
- Added support for Nous Research Hermes models as a provider.(v1.0.60-jetbrains)
- Enabled tool prompt override support via the .continuerc.json configuration file.(v1.0.60-jetbrains)
- Added a message counter display on the history page.(v1.0.60-jetbrains)
- Created invokable markdown prompts instead of requiring separate .prompt files.(v1.0.60-jetbrains)
- Registered the AI agent code check functionality as a new CLI subcommand, later renamed to `cn review`.(v1.0.60-jetbrains)
- Indicated excluded tools in the Model Context Protocol (MCP) server tools badge.(v1.3.31-vscode)
- Added support for OpenRouter Provider with dynamic model loading.(v1.3.31-vscode)
- Added support for Nous Research Hermes models as a provider.(v1.3.31-vscode)
- Enabled tool prompt override support in .continuerc.json configuration.(v1.3.31-vscode)
- Introduced the ability to create invokable markdown prompts instead of relying on .prompt files.(v1.3.31-vscode)
- Added a message counter display in the history page.(v1.3.31-vscode)
- Registered `cn check` as a CLI subcommand (later renamed to `cn review`).(v1.3.31-vscode)
- Added support for Nous Research Hermes models as a new provider.(@continuedev/config-yaml@1.42.0)
- Introduced OpenRouter Provider Support, enabling dynamic model loading.(@continuedev/config-yaml@1.42.0)
- Added support for tool prompt overrides within the .continuerc.json configuration file.(@continuedev/config-yaml@1.41.0)
- Added new OVHcloud models for use with the tool.(@continuedev/config-yaml@1.40.0)
- Introduced agent skills support in the CLI.(@continuedev/config-yaml@1.40.0)
- Enabled CLI to detect WSL environments and spawn the appropriate shell.(@continuedev/config-yaml@1.40.0)
- Enabled editing of documentation directly.(@continuedev/config-yaml@1.40.0)
- Added informative error message instead of truncating when encountering very large files.(@continuedev/config-yaml@1.40.0)
- Implemented proportional output truncation for read and bash operations.(@continuedev/config-yaml@1.40.0)
- Introduced support for subagents.(v1.0.59-jetbrains)
- Enabled agent skills functionality.(v1.0.59-jetbrains)
- Added a button to remove rules.(v1.0.59-jetbrains)
- Introduced support for new OVHcloud models.(v1.0.59-jetbrains)
- Enabled resubmission when overloaded errors occur.(v1.0.59-jetbrains)
- Added CLI support for the Ask Sage provider.(v1.0.59-jetbrains)
- Enabled CLI to detect WSL and spawn the appropriate shell.(v1.0.59-jetbrains)
- Introduced support for subagents.(v1.3.30-vscode)
- Enabled agent skills functionality.(v1.3.30-vscode)
- Added the ability to resubmit requests after overloaded errors.(v1.3.30-vscode)
- Introduced a 'remove rule' button.(v1.3.30-vscode)
- Added support for Ask Sage provider in the Continue CLI.(v1.3.30-vscode)
- Enabled CLI to detect WSL and spawn the appropriate shell.(v1.3.30-vscode)
- Added new OVHcloud models support.(v1.3.30-vscode)
- Added support for the grok-code-fast-1 model for faster Grok code processing.(v1.2.14-vscode)
- Introduced the ability to skip checking for updates when running in development mode in the CLI.(v1.2.14-vscode)
- Enabled always showing the context percentage in the CLI.(v1.2.14-vscode)
- Added a warning message to show configured Model Context Protocol (MCP) servers.(v1.2.14-vscode)
- Removed applied rules from the chat history display.(v1.2.14-vscode)
- Added reporting for tool failures.(v1.2.14-vscode)
- Enabled filtering CLI output lines that exceed 1000 characters.(v1.2.14-vscode)
- Kept the stream error toggle open by default.(v1.2.14-vscode)
- Designed thinking blocks to occupy less screen space.(v1.2.14-vscode)
- Added Sonnet lazy apply prompt functionality.(v1.2.14-vscode)
- Added reusable Continue Agents workflow for enhanced automation.(@continuedev/openai-adapters@1.37.0)
- Introduced a staging blueprint (cn-staging) for isolated testing environments.(@continuedev/openai-adapters@1.37.0)
- Enabled selecting images using Cmd+A shortcut.(@continuedev/openai-adapters@1.37.0)
- Added the Xiaomi MiMo logo to assets.(@continuedev/openai-adapters@1.37.0)
- Introduced the new MiMo-V2-Flash model.(@continuedev/openai-adapters@1.37.0)
- Refined CLI bash tool truncation and added related documentation.(@continuedev/openai-adapters@1.37.0)
- Enabled case insensitive matching strategy for find and replace operations.(@continuedev/config-yaml@1.38.0)
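The case-insensitive matching strategy for find and replace mentioned above can be illustrated with a short sketch (function names are hypothetical; this is not Continue's actual implementation): locate the search text while ignoring case, but splice in the caller's replacement verbatim.

```typescript
// Find the search string in the source, ignoring case.
// Returns the index of the first match, or -1 if none.
function findCaseInsensitive(haystack: string, needle: string): number {
  return haystack.toLowerCase().indexOf(needle.toLowerCase());
}

// Replace the first case-insensitive occurrence of `search` with
// `replacement`, preserving the replacement text exactly as given.
function replaceCaseInsensitive(
  source: string,
  search: string,
  replacement: string,
): string {
  const idx = findCaseInsensitive(source, search);
  if (idx === -1) return source; // no match: leave the text unchanged
  return source.slice(0, idx) + replacement + source.slice(idx + search.length);
}
```

The key design point is that only the *matching* is case-insensitive; the replacement is inserted as-is, so the edit never silently changes the casing the user asked for.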
All CHAT Bug Fixes
- Fixed filtering logic to exclude .md files when loading agent configurations.(v1.2.15-vscode)
- Simplified configuration error messages.(v1.2.15-vscode)
- Fixed issue where duplicate tool messages were being added.(v1.2.15-vscode)
- Prevented waiting for the session to load from history.(v1.2.15-vscode)
- Fixed display issue where the full right side gradient border was shown incorrectly.(v1.2.15-vscode)
- Resolved documentation tab navigation issues.(v1.2.15-vscode)
- Fixed command syntax in the contributing file.(v1.2.15-vscode)
- Fixed autocompaction failures that occurred when the context length was exceeded.(v1.2.15-vscode)
- Fixed tool call parsing to correctly support object-type arguments.(v1.2.15-vscode)
- Set the isComplete metadata correctly when agents finish execution.(v1.2.15-vscode)
- Set isComplete=true after an agent turn ends without tool calls.(v1.2.15-vscode)
- Fixed command wrapping issue for MCP servers when a Windows host connects to WSL.(v1.2.15-vscode)
- Fixed flaky hub loader tests in the CLI.(v1.2.15-vscode)
- Fixed local setup screen where input text was black on a dark background.(v1.2.15-vscode)
- Fixed flaky hub loader tests in the CLI.(v1.0.60-jetbrains)
- Fixed local setup screen input text appearing black on dark backgrounds.(v1.0.60-jetbrains)
- Fixed passing the GITHUB_TOKEN to VS Code end-to-end tests for ripgrep download.(v1.0.60-jetbrains)
- Fixed stream error reporting to use the underlying provider name.(v1.0.60-jetbrains)
- Fixed WSL compatibility issues by passing pre-read content to the RegistryClient.(v1.0.60-jetbrains)
- Fixed shell PATH resolution by correctly detecting the WSL remote environment.(v1.0.60-jetbrains)
- Fixed a file descriptor leak related to resource monitoring (lsof).(v1.0.60-jetbrains)
- Fixed silent failures for `cn check` workers (now `cn review`).(v1.0.60-jetbrains)
- Fixed OpenAI Responses API parallel tool calls losing call_ids.(v1.0.60-jetbrains)
- Fixed OpenAI timeout settings not being applied correctly.(v1.0.60-jetbrains)
- Ensured configuration loading does not crash when one block fails.(v1.0.60-jetbrains)
- Stopped showing errors for blank inputs when using hub blocks in local configuration.(v1.0.60-jetbrains)
- Fixed retrieval of the reasoning toggle state from local storage.(v1.0.60-jetbrains)
- Added support for the reasoning content field in the chat body.(v1.0.60-jetbrains)
- Improved error messages for invalid rule files.(v1.0.60-jetbrains)
- Prevented input containing no non-whitespace characters from being sent from input fields.(v1.0.60-jetbrains)
- Fixed flaky hub loader tests in the CLI.(v1.3.31-vscode)
- Fixed local setup screen input text appearing black on dark backgrounds.(v1.3.31-vscode)
- Fixed passing GITHUB_TOKEN to VS Code end-to-end tests for ripgrep download.(v1.3.31-vscode)
- Fixed stream error reporting to use the underlying provider name.(v1.3.31-vscode)
- Fixed passing pre-read content to RegistryClient for better WSL compatibility.(v1.3.31-vscode)
- Fixed shell PATH resolution by correctly detecting WSL remote environments.(v1.3.31-vscode)
- Fixed an lsof file descriptor leak related to resource monitoring.(v1.3.31-vscode)
- Fixed OpenAI Responses API parallel tool calls losing call_ids.(v1.3.31-vscode)
- Fixed OpenAI timeout settings not being applied correctly.(v1.3.31-vscode)
- Ensured configuration loading does not crash when a single block fails.(v1.3.31-vscode)
- Stopped showing errors for blank inputs when using hub blocks in local configuration.(v1.3.31-vscode)
- Fixed OpenAI tool calls losing call_ids during parallel execution.(v1.3.31-vscode)
- Fixed reasoning toggle not being retrieved correctly from local storage.(v1.3.31-vscode)
- Fixed issues where `cn check` workers were failing silently (now renamed to `cn review`).(v1.3.31-vscode)
- Prevented input containing no non-whitespace characters from being sent from the input field.(v1.3.31-vscode)
- Made configuration loading resilient to a single block failing.(v1.3.31-vscode)
- Improved error messages for invalid rule files.(v1.3.31-vscode)
- Fixed issue where reasoning content field was not supported in the chat body.(@continuedev/config-yaml@1.42.0)
- Enabled SSL verification option for client transports in the CLI.(@continuedev/config-yaml@1.42.0)
- Fixed issue where the reasoning toggle state was not being correctly retrieved from local storage.(@continuedev/config-yaml@1.42.0)
- Resolved error handling for blank inputs when using hub blocks in local configuration.(@continuedev/config-yaml@1.42.0)
- Fixed issue where OpenAI Responses API parallel tool calls were losing call_ids.(@continuedev/config-yaml@1.41.0)
- Resolved incorrect shell PATH resolution when operating in a WSL remote environment.(@continuedev/config-yaml@1.41.0)
- Prevented file descriptor count leaks in resource monitoring by implementing caching.(@continuedev/config-yaml@1.41.0)
- Fixed resource monitoring service initialization to correctly set file descriptor check time and count upon startup.(@continuedev/config-yaml@1.41.0)
- Prevented negative file descriptor counts from being reported by the ResourceMonitoringService.(@continuedev/config-yaml@1.41.0)
- Fixed issues related to accounting for different tokenizers.(@continuedev/config-yaml@1.40.0)
- Added missing cancelStream call and return logic for non-retryable errors.(@continuedev/config-yaml@1.40.0)
- Now shows the underlying provider name when opening a GitHub issue.(@continuedev/config-yaml@1.40.0)
- Fixed flaky hub loader tests in the CLI.(@continuedev/config-yaml@1.40.0)
- Corrected a command issue in the contributing file.(@continuedev/config-yaml@1.40.0)
- Fixed conversation compaction when dealing with dangling tool calls.(@continuedev/config-yaml@1.40.0)
- Addressed issues related to duplicate tool messages.(@continuedev/config-yaml@1.40.0)
- Fixed ESLint errors, including issues with unresolved imports and missing parallel counts.(@continuedev/config-yaml@1.40.0)
- Fixed local setup screen where input text was black on a dark background.(@continuedev/config-yaml@1.40.0)
- Passed GITHUB_TOKEN to VS Code E2E tests to ensure ripgrep download works.(@continuedev/config-yaml@1.40.0)
- Improved WSL compatibility by passing pre-read content to RegistryClient.(@continuedev/config-yaml@1.40.0)
- Prevented waiting for the session to load from history unnecessarily.(@continuedev/config-yaml@1.40.0)
- Replaced console.debug with logger in the exit tool.(@continuedev/config-yaml@1.40.0)
- Fixed navigation issues in the IDE Extensions tab within documentation.(@continuedev/config-yaml@1.40.0)
- Resolved an MDX parsing error in the run-agents-locally guide.(@continuedev/config-yaml@1.40.0)
- Set isComplete metadata correctly when agents finish execution.(@continuedev/config-yaml@1.40.0)
- Skipped cmd.exe wrapping for MCP servers when a Windows host connects to WSL.(@continuedev/config-yaml@1.40.0)
- Enabled support for object-type arguments in tool call parsing.(@continuedev/config-yaml@1.40.0)
- Updated tool permissions for MCP and bash in headless mode.(@continuedev/config-yaml@1.40.0)
- Used ide.runCommand when a Windows host connects to WSL to resolve connection issues.(@continuedev/config-yaml@1.40.0)
- Used the underlying provider name when reporting stream errors.(@continuedev/config-yaml@1.40.0)
- Fixed duplicate tool messages being added to the conversation.(v1.0.59-jetbrains)
- Prevented waiting for the session to load from history during startup.(v1.0.59-jetbrains)
- Resolved issues with documentation tab navigation.(v1.0.59-jetbrains)
- Fixed command syntax in the contributing file.(v1.0.59-jetbrains)
- Fixed autocompaction failures that occurred when context length was exceeded.(v1.0.59-jetbrains)
- Fixed tool call parsing to correctly support object-type arguments.(v1.0.59-jetbrains)
- Fixed setting of isComplete metadata when agents finish execution.(v1.0.59-jetbrains)
- Fixed setting isComplete=true after an agent turn ends without tool calls.(v1.0.59-jetbrains)
- Skipped cmd.exe wrapping for MCP servers when a Windows host connects to WSL.(v1.0.59-jetbrains)
- Fixed duplicate tool messages being added to the conversation.(v1.3.30-vscode)
- Removed an unnecessary wait for the session to load from history.(v1.3.30-vscode)
- Fixed display issue showing the full right side gradient border.(v1.3.30-vscode)
- Resolved documentation tab navigation issues.(v1.3.30-vscode)
- Fixed command execution when a Windows host connects to WSL by using ide.runCommand.(v1.3.30-vscode)
- Fixed conversation compaction issues caused by dangling tool calls.(v1.3.30-vscode)
- Fixed autocompaction failures that occurred from exceeding context length.(v1.3.30-vscode)
- Fixed tool call parsing to correctly support object-type arguments.(v1.3.30-vscode)
- Fixed metadata setting to ensure isComplete is set when agents finish execution.(v1.3.30-vscode)
- Fixed metadata setting to ensure isComplete=true after an agent turn ends without tool calls.(v1.3.30-vscode)
- Skipped cmd.exe wrapping for MCP servers when a Windows host connects to WSL.(v1.3.30-vscode)
- Ensured correct token accounting across different tokenizers.(@continuedev/fetch@1.8.0)
- Resolved an issue where the GH_TOKEN was not being added to the continue-agents workflow, addressing issue #9493.(@continuedev/fetch@1.8.0)
- Added missing cancelStream call and return logic for non-retryable errors to prevent hangs.(@continuedev/fetch@1.8.0)
- Fixed compaction issues for missing tool results in the CLI.(@continuedev/fetch@1.8.0)
- Expanded CLI model capability detection to correctly include Llama, Nemotron, and Mistral models, addressing issue #8845 and #1.(@continuedev/fetch@1.8.0)
- Resolved a circular dependency issue within the CLI's uploadArtifact tool.(@continuedev/fetch@1.8.0)
- Fixed an issue where WSL workspace paths were not being correctly decoded from URI encoding.(@continuedev/fetch@1.8.0)
- Prevented fallback to relative paths when the file is not a markdown file.(@continuedev/fetch@1.8.0)
- Fixed duplicate tool messages being displayed.(@continuedev/fetch@1.8.0)
- Ensured cross-target LanceDB binaries are correctly copied, addressing issue #9100.(@continuedev/fetch@1.8.0)
- Fixed a linting issue related to missing parallel count.(@continuedev/fetch@1.8.0)
- Merged yaml.schemas settings, addressing issue #7080.(@continuedev/fetch@1.8.0)
- Fixed multi-turn tools test API initialization timing in openai-adapters.(@continuedev/fetch@1.8.0)
- Fixed tool_choice format and usage token handling in openai-adapters.(@continuedev/fetch@1.8.0)
- Fixed usage token double-emission when using Vercel SDK streams in openai-adapters.(@continuedev/fetch@1.8.0)
- Fixed Vercel SDK test API initialization timing in openai-adapters.(@continuedev/fetch@1.8.0)
- Fixed context length errors, truncation, and related issues.(@continuedev/fetch@1.8.0)
- Fixed an issue where the tree-sitter lookup in .js and .ts files was incorrectly picking up comments.(@continuedev/fetch@1.8.0)
- Prevented ConcurrentModificationException when accessing keymap in the IntelliJ extension.(@continuedev/fetch@1.8.0)
- Fixed context pollution between sessions in the CLI.(v1.2.14-vscode)
- Fixed an issue where 'instant reject all' did not work correctly in VS Code.(v1.2.14-vscode)
- Fixed typechecking for write file arguments.(v1.2.14-vscode)
- Fixed the apply prompt to correctly preserve comments in code changes.(v1.2.14-vscode)
- Fixed CLI validation for required parameters in tool calls.(v1.2.14-vscode)
- Fixed an issue where the prepended fim_prefix tag was incorrectly used with Mercury Coder.(v1.2.14-vscode)
- Fixed a loading bug that occurred in headless mode.(v1.2.14-vscode)
- Prevented sensitive files from being included in next edit diffs.(v1.2.14-vscode)
- Renamed the command title from 'View History' to 'View Logs'.(v1.2.14-vscode)
- Fixed Model Context Protocol (MCP) error output and related bugs.(v1.2.14-vscode)
- Fixed an auto compaction loop in the CLI and ensured messages are pruned until valid.(v1.2.14-vscode)
- Fixed the use of system prompts for default endpoint type instances in the next edit operation.(v1.2.14-vscode)
- Fixed missing hub mock rejection in tests.(@continuedev/openai-adapters@1.37.0)
- Ensured environment variables are used in the Write job summary step.(@continuedev/openai-adapters@1.37.0)
- Changed the introductory message label from 'Agent:' to 'Config:' for clarity.(@continuedev/openai-adapters@1.37.0)
- Clarified that editing cannot occur in parallel with itself.(@continuedev/openai-adapters@1.37.0)
- Addressed CLI compaction for missing tool results.(@continuedev/openai-adapters@1.37.0)
- Implemented CLI restart mechanism for manual updates.(@continuedev/openai-adapters@1.37.0)
- Improved detection and handling of context length errors.(@continuedev/openai-adapters@1.37.0)
- Implemented context length fixes and truncation logic.(@continuedev/openai-adapters@1.37.0)
- Prevented fallback to relative paths when the file is not markdown.(@continuedev/openai-adapters@1.37.0)
- Fixed lint errors and updated tests related to markdown-only fallback.(@continuedev/openai-adapters@1.37.0)
- Added injected-blocks tests for CLI secret resolution.(@continuedev/openai-adapters@1.37.0)
- Reduced vulnerabilities in package.json and package-lock.json for continue-sdk.(@continuedev/openai-adapters@1.37.0)
- Prevented string interpolation issues in remaining workflow steps.(@continuedev/openai-adapters@1.37.0)
- Fixed an issue related to hub error handling for non-markdown hub slugs.(@continuedev/openai-adapters@1.37.0)
- Regenerated package-lock.json to include the missing @types/node@25.0.3.(@continuedev/openai-adapters@1.37.0)
- Removed symlink logic from both production and staging blueprint templates.(@continuedev/openai-adapters@1.37.0)
- Ensured a newly created rule is immediately shown.(@continuedev/openai-adapters@1.37.0)
- Used environment variables for safe string handling in the continue-agents workflow.(@continuedev/openai-adapters@1.37.0)
- Implemented proxy usage for unrendered injected block secrets.(@continuedev/openai-adapters@1.37.0)
- Fixed issues related to context length, truncation, and usage token handling in OpenAI adapters, including defensive type checks and fallbacks for stream usage.(@continuedev/config-yaml@1.38.0)
- Resolved issues with tool_choice format and usage token emission in OpenAI streams, specifically addressing double-emission in Vercel SDK streams.(@continuedev/config-yaml@1.38.0)
- Fixed an issue where JSON contents of create_new_file operations were not handled correctly.(@continuedev/config-yaml@1.38.0)
- Fixed a ConcurrentModificationException when accessing keymaps in the IntelliJ extension.(@continuedev/config-yaml@1.38.0)
- Expanded model capability detection in the CLI to correctly include Llama, Nemotron, and Mistral models.(@continuedev/config-yaml@1.38.0)
- Fixed path conversion issues (path to URI and vice versa) within the CLI.(@continuedev/config-yaml@1.38.0)
- Ensured cross-target LanceDB binaries are correctly copied during builds.(@continuedev/config-yaml@1.38.0)
- Fixed logic so that tree-sitter lookup in .js and .ts files now correctly picks up only the last preceding comment before a code block.(@continuedev/config-yaml@1.38.0)
- Fixed an issue where the system would fallback to a relative path even when the file was not markdown.(@continuedev/config-yaml@1.38.0)
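Many of the compaction and context-length fixes above share one underlying idea: when a conversation no longer fits the model's context window, prune the oldest messages until the request is valid again. A minimal sketch of that idea, assuming a crude four-characters-per-token estimate (both the estimate and the function names are illustrative, not Continue's actual algorithm):

```typescript
interface Message {
  role: string;
  content: string;
}

// Rough token estimate: ~4 characters per token (an assumption for
// this sketch; real implementations use the model's tokenizer).
function estimateTokens(messages: Message[]): number {
  return messages.reduce((sum, m) => sum + Math.ceil(m.content.length / 4), 0);
}

// Drop the oldest messages until the estimate fits within the limit,
// always keeping at least the most recent message.
function pruneToFit(messages: Message[], maxTokens: number): Message[] {
  const pruned = [...messages];
  while (pruned.length > 1 && estimateTokens(pruned) > maxTokens) {
    pruned.shift();
  }
  return pruned;
}
```

Pruning from the front keeps the most recent turns, which matter most for the model's next response; a production implementation would also avoid orphaning tool results whose originating tool calls were pruned, which is exactly the "dangling tool calls" class of bug addressed above.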
Releases with CHAT Changes
v1.2.15-vscode (6 features, 14 fixes): This release introduces major new capabilities, including support for subagents and agent skills, enhancing automation possibilities. Several critical bugs were fixed, such as issues with tool call parsing, conversation compaction, and WSL connectivity for Windows hosts. Additionally, the /info command now provides usage statistics, and new OVHcloud models have been added.
v1.0.60-jetbrains (7 features, 16 fixes): This release introduces significant new capabilities, including support for the OpenRouter provider and Nous Research Hermes models, alongside the ability to override tool prompts via configuration. Several critical bugs were resolved, particularly around WSL compatibility, OpenAI API calls, and configuration loading stability. Additionally, the AI agent code checking feature has been refined and renamed to `cn review`.
v1.3.31-vscode (7 features, 17 fixes): This release introduces significant new capabilities, including support for the OpenRouter provider and Nous Research Hermes models, alongside the ability to override tool prompts in configuration. Several critical bugs were resolved, particularly around WSL compatibility, configuration loading robustness, and OpenAI parallel tool calls. Additionally, the `cn check` CLI tool has been renamed to `cn review` with associated UX enhancements.
@continuedev/config-yaml@1.42.0 (2 features, 4 fixes): This release significantly expands model support by adding Nous Research Hermes models and enabling OpenRouter integration with dynamic model loading. Several bugs were addressed, including better support for reasoning content in the chat body and improved CLI transport security via SSL verification.
@continuedev/config-yaml@1.41.0 (1 feature, 5 fixes): This release introduces the ability to override tool prompts directly in your .continuerc.json configuration file, offering greater control over tool behavior. Several critical bug fixes address issues with OpenAI parallel tool calls and improve the stability of resource monitoring by preventing file descriptor leaks.
@continuedev/config-yaml@1.40.0 (6 features, 21 fixes): This release focuses heavily on stability and environment compatibility, particularly around WSL connections and tool execution, by fixing numerous bugs related to tokenizers, error handling, and command execution. New features include support for agent skills in the CLI and the addition of new OVHcloud models.
v1.0.59-jetbrains (7 features, 9 fixes): This release introduces significant new capabilities, including support for subagents and agent skills, enhancing the tool's autonomy and customization. Several bugs related to tool message duplication, session loading, and configuration errors have been resolved, alongside improvements to headless mode and WSL connectivity.
v1.3.30-vscode (7 features, 11 fixes): This release introduces significant new capabilities, including support for subagents and agent skills, enhancing automation workflows. Several critical bugs related to tool message duplication, session loading, and WSL connectivity have been resolved. Additionally, the /info command now provides usage statistics, and new OVHcloud models are now supported.
@continuedev/fetch@1.8.0 (19 fixes): This release focuses heavily on stability and correctness, particularly around model interaction and CLI functionality. Key fixes include improved handling for various tokenizers, expanded model detection for Llama, Nemotron, and Mistral in the CLI, and numerous stability improvements within the openai-adapters for token usage tracking.
v1.2.14-vscode (10 features, 12 fixes): This release focuses heavily on improving the Command Line Interface (CLI) experience, including better context visibility and stability fixes. New features include faster Grok processing and UI refinements like smaller thinking blocks. Numerous bug fixes address issues related to context handling, prompt application, and headless mode stability.
@continuedev/openai-adapters@1.37.0 (6 features, 19 fixes): This release introduces a reusable Continue Agents workflow and a new staging blueprint (cn-staging) for isolated testing. Key fixes focus on improving context length handling, resolving CLI update issues, and enhancing secret resolution security. Dependencies like mocha and monaco-editor have also been upgraded.
@continuedev/config-yaml@1.38.0 (1 feature, 9 fixes): This release focuses heavily on stabilizing the OpenAI adapters, improving token usage tracking across various streaming scenarios, and fixing several CLI detection and path resolution issues. Key fixes include better handling of context length and ensuring compatibility with newer models like Llama and Mistral.