Cline tools
Updates related to the tools component of Cline.
All TOOLS Features
- Added the experimental "Lazy Teammate Mode" toggle. (v3.77.0)
- Enabled the `read_file` tool to support chunked reading for more targeted file access. (v3.77.0)
- Added the Cline Kanban launch modal in the webview, and the CLI now launches Kanban by default with a migration view. (v3.76.0)
- Added a toggle to disable feature tips within the chat interface. (v3.76.0)
- Added repeated tool call loop detection to prevent infinite loops that waste tokens. (v3.76.0)
- Implemented dynamic free model detection for the Cline API. (v3.74.0)
- Added a feature tips tooltip displayed during the thinking state. (v3.74.0)
- Added W&B Inference by CoreWeave as a new API provider supporting 17 models. (v3.73.0)
- Added Anthropic Opus 4.6 fast mode variants. (v3.72.0)
- Added support for skills and an optional modelId in subagent configuration. (v3.67.0)
- Introduced AgentConfigLoader for loading agent configurations from files. (v3.67.0)
- Enabled Responses API support for the OpenAI native provider. (v3.67.0)
- Added the /q command to quit the Command Line Interface (CLI). (v3.67.0)
- Added a dynamic flag to adjust banner cache duration. (v3.67.0)
- Introduced the native `use_subagents` tool to replace legacy subagents. (v3.58.0)
- Enabled support for bundling `endpoints.json` so packaged distributions can ship required endpoints. (v3.58.0)
- Added support for parallel tool calling with Amazon Bedrock. (v3.58.0)
- Introduced a new experimental "double-check completion" feature to verify work before marking tasks complete. (v3.58.0)
- Added new task controls/flags to the CLI, including a custom `--thinking` token budget and `--max-consecutive-mistakes` for yolo runs. (v3.58.0)
- Added new UI/options (including connection/test buttons) and support for syncing deletion of remotely configured MCP servers via remote config. (v3.58.0)
- Enabled 1M context model options for Claude Opus 4.6 in Vertex / Claude Code. (v3.58.0)
- Added support for ZAI/GLM: GLM-5. (v3.58.0)
- Introduced Cline CLI 2.0, installable via `npm install -g cline`. (v3.57.0)
- Enabled access to the Anthropic Opus 4.6 model. (v3.57.0)
- Enabled access to the Minimax-2.1 and Kimi-k2.5 models for a limited-time free promotion. (v3.57.0)
- Enabled access to the Codex-5.3 model for ChatGPT subscribers. (v3.57.0)
- Added support for the gpt-5.2-codex OpenAI model. (v3.50.0)
- Introduced a new create-pull-request skill for automated PR generation. (v3.50.0)
- Enabled phased rollout of Responses API usage instead of defaulting for every supported model. (v3.49.1)
- Added experimental support for Background Edits, allowing file editing without opening the diff view. (v3.47.0)
- Updated the free model to MiniMax M2.1, replacing MiniMax M2. (v3.47.0)
- Added support for Azure-based identity authentication in the OpenAI Compatible provider and Azure OpenAI. (v3.47.0)
- Added the `supportsReasoning` property to Baseten models. (v3.47.0)
- Added support for the new GLM 4.7 model. (v3.46.0)
- Introduced the Apply Patch tool for GPT-5+ models, replacing previous diff edit tools. (v3.46.0)
- Enhanced background terminal execution with command tracking, log file output, zombie process prevention (10-minute timeout), and clickable log paths in the UI. (v3.46.0)
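The repeated tool call loop detection added in v3.76.0 can be illustrated with a minimal sketch. This is not Cline's actual implementation; the window size and the idea of comparing consecutive (tool, arguments) pairs are assumptions chosen for illustration.

```python
from collections import deque


class ToolLoopDetector:
    """Hypothetical sketch: flags when the same (tool, arguments) pair
    repeats too many times in a row, a sign the agent is stuck in a
    loop and burning tokens without making progress."""

    def __init__(self, max_repeats: int = 3):
        self.max_repeats = max_repeats
        # Only the most recent `max_repeats` calls are kept.
        self.recent = deque(maxlen=max_repeats)

    def record(self, tool_name: str, arguments: str) -> bool:
        """Record a tool call; return True if a loop is suspected."""
        call = (tool_name, arguments)
        self.recent.append(call)
        # A loop is suspected once the window is full and every entry
        # in it is identical to the latest call.
        return (len(self.recent) == self.max_repeats
                and all(c == call for c in self.recent))


detector = ToolLoopDetector(max_repeats=3)
results = [detector.record("read_file", '{"path": "src/app.ts"}') for _ in range(3)]
print(results)  # [False, False, True] -- the third identical call trips the detector
```

A real implementation would likely also need to distinguish legitimate repeats (e.g. polling a long-running command) from genuine loops, which is why the feature ships as a heuristic rather than a hard block.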
All TOOLS Bug Fixes
- Fixed an issue where the `new_task` tool was incorrectly included in the system prompt when running in yolo/headless mode. (v3.77.0)
- Corrected formatting issues in the Kanban demo video. (v3.77.0)
- Fixed an issue preventing CLI Kanban from spawning correctly on Windows by enabling shell mode for `npx.cmd`. (v3.76.0)
- Replaced the generic error message shown when a user is not logged into Cline with a more specific message. (v3.74.0)
- Fixed an alignment issue in ClineRulesToggleModal padding to match ServersToggleModal. (v3.74.0)
- Skipped WebP format processing for GLM and Devstral models running via llama.cpp. (v3.74.0)
- Ensured the user-configured context window is respected in LiteLLM getModel() calls. (v3.74.0)
- Ensured explicit model IDs outside the static catalog are honored in the W&B provider. (v3.74.0)
- Added missing Fireworks serverless models and their associated pricing information. (v3.74.0)
- Fixed crashes in the Claude Code Provider when encountering rate limit events, empty content arrays, error results, or unknown content types. (v3.73.0)
- Ensured tool handlers (read_file, list_files, list_code_definition_names, search_files) return graceful errors instead of crashing. (v3.73.0)
- Resolved native tool placeholder interpolation issues within prompts. (v3.72.0)
- Fixed the Gemini Flash output token limit to 8192 across all providers. (v3.72.0)
- Fixed unit test path normalization issues specifically on Windows. (v3.72.0)
- Resolved flaky hooks tests occurring in Windows environments. (v3.72.0)
- Fixed Bedrock to correctly handle thinking and redacted_thinking blocks during message conversion and streaming. (v3.72.0)
- Prevented crashes when the list_files or list_code_definition_names functions receive an invalid file path. (v3.72.0)
- Fixed a crash related to reasoning delta when processing usage-only stream chunks. (v3.67.0)
- Corrected an issue where OpenAI tool ID transformation was incorrectly restricted to the native provider only. (v3.67.0)
- Fixed an authentication check failure when operating in ACP mode. (v3.67.0)
- Resolved an issue where the CLI yolo setting was being incorrectly persisted to disk. (v3.67.0)
- Fixed the inline focus-chain slider to correctly operate within its feature row. (v3.67.0)
- Ensured compatibility with Gemini 3.1 Pro. (v3.67.0)
- Corrected authentication issues when using the Cline CLI with the ACP flag. (v3.67.0)
- Fixed the CLI to correctly handle stdin redirection in CI/headless environments. (v3.58.0)
- Fixed the CLI to preserve OAuth callback paths during auth redirects. (v3.58.0)
- Fixed OAuth callback reliability in VS Code Web by generating callback URLs via `vscode.env.asExternalUri`. (v3.58.0)
- Fixed the Terminal to surface command exit codes in results and improved long-running `execute_command` timeout behavior. (v3.58.0)
- Fixed UI rendering by adding a loading indicator and fixing `api_req_started` rendering. (v3.58.0)
- Prevented duplicate streamed text rows from appearing after task completion. (v3.58.0)
- Fixed the API to preserve the selected Vercel model when model metadata is missing. (v3.58.0)
- Ensured telemetry flushes on shutdown and routed PostHog networking through proxy-aware shared fetch. (v3.58.0)
- Increased the Windows E2E test timeout to reduce flakiness in CI. (v3.58.0)
- Fixed an issue where the `read_file` tool failed to support reading large files. (v3.57.0)
- Resolved a crash occurring when entering decimal input in OpenAI Compatible price fields (#8129). (v3.57.0)
- Corrected build-complete handlers when updating the API configuration. (v3.57.0)
- Fixed a provider missing from the model list. (v3.57.0)
- Fixed an issue where the Favorite Icon/Star was getting clipped in the task history view. (v3.57.0)
- Fixed an issue where remotely configured providers were not being selected correctly. (v3.50.0)
- Resolved a bug where act_mode_respond could result in consecutive, unnecessary calls. (v3.50.0)
- Fixed incorrect tool call IDs when switching between different model formats. (v3.50.0)
- Fixed workflow slash command search to be case-insensitive. (v3.49.1)
- Fixed incorrect model display in ModelPickerModal when using LiteLLM. (v3.49.1)
- Fixed LiteLLM model fetching when using the default base URL. (v3.49.1)
- Fixed a crash that occurred when OpenAI-compatible APIs sent usage chunks with empty or null choices arrays at the end of streaming. (v3.49.1)
- Fixed an incorrect model ID for the Kat Coder Pro Free model. (v3.49.1)
- Fixed an issue where expired tokens were being used in authenticated requests. (v3.47.0)
- Fixed the diff view to correctly exclude binary files that lack file extensions. (v3.47.0)
- Fixed an issue where file endings and trailing newlines were not being preserved. (v3.47.0)
- Fixed rate limiting issues specific to the Cerebras provider. (v3.47.0)
- Fixed Auto Compact functionality for the Claude Code provider. (v3.47.0)
- Made the Workspace and Favorites history filters operate independently. (v3.47.0)
- Fixed remote MCP server connection failures, specifically handling 404 responses. (v3.47.0)
- Disabled native tool calling for the Deepseek 3.2 speciale model. (v3.47.0)
- Fixed the Baseten model selector interface. (v3.47.0)
- Fixed duplicate error messages that occurred during streaming for the Diff Edit tool when Parallel Tool Calling was disabled. (v3.46.0)
- Corrected typos found in Gemini system prompt overrides. (v3.46.0)
- Resolved issues with the model picker favorites ordering, star toggle functionality, and keyboard navigation for OpenRouter and Vercel AI Gateway providers. (v3.46.0)
- Fixed an issue where remote configuration values were not being fetched from the cache. (v3.46.0)
Releases with TOOLS Changes
- v3.77.0 (2 features, 2 fixes): This release introduces the experimental "Lazy Teammate Mode" toggle and enhances file reading capabilities with chunked support in the `read_file` tool. Several minor issues were resolved, including cleaning up system prompt usage in headless mode and polishing notification hook functionality.
- v3.76.0 (3 features, 1 fix): This release introduces the Cline Kanban launch modal in the webview and sets Kanban as the default launch experience for the CLI. Key improvements include new loop detection to save tokens and a setting to disable feature tips in the chat.
- v3.74.0 (2 features, 6 fixes): This release introduces dynamic free model detection for the Cline API and adds helpful feature tips tooltips during the thinking state. Several bugs were fixed, including improving error messaging for logged-out users and ensuring correct context window handling in LiteLLM. Performance is also improved via a new file read deduplication cache.
- v3.73.0 (1 feature, 2 fixes): This release introduces W&B Inference by CoreWeave as a new supported API provider with 17 models. Key fixes address stability issues in the Claude Code Provider and ensure tool handlers return graceful errors instead of crashing. Parallel tool calling support has also been improved for OpenRouter and Cline.
- v3.72.0 (1 feature, 6 fixes): This release introduces support for the new Anthropic Opus 4.6 fast mode variants, expanding model options for users. Several critical bugs were fixed, including issues with native tool placeholder interpolation and stability problems on Windows. Additionally, user experience improvements include requiring consent for Markdown image loading.
- v3.67.0 (5 features, 7 fixes): This release introduces significant configuration flexibility by adding support for skills and file-based agent configurations via the new AgentConfigLoader. Key fixes address stability issues, including a reasoning delta crash and compatibility problems with Gemini 3.1 Pro. Users will also benefit from reduced response latency due to websocket preconnection.
- v3.58.0 (8 features, 9 fixes): This release introduces significant new capabilities, including native subagent support, parallel tool calling for Amazon Bedrock, and an experimental double-check feature for task verification. Several critical bugs were fixed across the CLI, VS Code Web, and Terminal, alongside UX improvements like moving reasoning effort into model settings.
- v3.57.0 (4 features, 5 fixes): This release introduces Cline CLI 2.0 and expands model availability, adding Anthropic Opus 4.6, Minimax-2.1, Kimi-k2.5, and Codex-5.3. Several critical bugs were fixed, including issues with reading large files and crashes in price input fields, alongside a UX change making skills permanently enabled.
- v3.50.0 (2 features, 3 fixes): This release introduces significant new capabilities, including support for the gpt-5.2-codex model and a new create-pull-request skill. Several critical bugs related to provider selection and tool call handling have also been resolved.
- v3.49.1 (1 feature, 5 fixes): This release focuses primarily on stability and compatibility fixes, addressing issues with model selection, streaming API responses, and workflow slash command search. It also introduces a phased rollout of Responses API usage.
- v3.47.0 (4 features, 9 fixes): This release introduces experimental Background Edits for non-blocking file modifications and updates the free model to MiniMax M2.1. Several stability fixes address issues with token usage, provider rate limiting (Cerebras), and connection handling for remote MCP servers.
- v3.46.0 (3 features, 4 fixes): This release introduces the new GLM 4.7 model and the Apply Patch tool for GPT-5+ models. We have significantly enhanced background terminal execution capabilities and resolved several bugs related to model picker navigation and error message duplication.