Local Model Configuration
Local agents are managed through YAML configuration files. The user-level configuration file is located at:

- macOS / Linux: `~/.vjsp/config.yaml`
- Windows: `%USERPROFILE%\.vjsp\config.yaml`
To edit this file, click the settings icon next to “Local Config” in the configuration dropdown menu at the top-right corner of the IDE chat input box. This will open config.yaml.
Agent = Models + Rules + MCP Servers
Creating a Local Model
In the IDE, select Local Agent, then click the settings button next to it to open config.yaml. Add your model configuration to the file and save it to activate the settings.
```yaml
- name: Qwen3-Coder-30B-A3B-Instruct
  provider: openai
  model: Qwen3-Coder-30B-A3B-Instruct
  apiBase: http://www.example.com/v1
  capabilities:
    - tool_use
  roles:
    - chat
    - edit
    - apply
```

Top-Level Structure of the Configuration File
Below is a description of all configurable top-level properties in config.yaml. Unless explicitly marked as required, all properties are optional.
Top-level attributes in config.yaml include:
```yaml
name: My Config   # Required: Configuration name
version: 1.0.0    # Required: Version number
schema: v1        # Required: Schema version
models:           # Optional: Model definitions
context:          # Optional: Context providers
rules:            # Optional: System rules
prompts:          # Optional: Invocable prompts
docs:             # Optional: Documentation site indexes
mcpServers:       # Optional: MCP tool servers
data:             # Optional: Development data reporting destinations
```

models
This section defines the language models used in the configuration. Models can enable functionalities such as conversation, code editing, and content summarization.
Attribute Reference:
- `name` (Required): A unique identifier for the model within the configuration.
- `provider` (Required): The model provider (e.g., `openai`).
- `model` (Required): The specific model name (e.g., `Qwen3-Coder-30B-A3B-Instruct`).
- `apiBase`: Overrides the default base API URL for the model.
- `roles`: An array specifying the roles the model can assume: `chat` (conversation), `autocomplete` (code completion), `embed` (embedding), `rerank` (re-ranking), `edit` (editing), and `apply` (execution). The default is `[chat, edit, apply]`.
- `capabilities`: A string array declaring model capabilities, overriding the capabilities auto-detected from the provider and model.
  - `tool_use`: Enables function/tool calling (required for Agent mode).
  - `image_input`: Enables image upload and processing.

  Most model capabilities are auto-detected; a manual override is useful for custom deployments or when auto-detection fails.
- `maxStopWords`: Maximum number of stop words allowed, preventing API errors caused by overly long stop lists.
- `promptTemplates`: Overrides the default prompt templates for different model roles. Supported keys include `chat`, `edit`, `apply`, and `autocomplete`. The `chat` property must reference a valid template name (e.g., `llama3` or `anthropic`).
- `chatOptions`: Applied if the model includes the `chat` role. These settings affect both Chat and Agent modes:
  - `baseSystemMessage`: Overrides the default system prompt for Chat mode.
  - `basePlanSystemMessage`: Overrides the default system prompt for Plan mode.
  - `baseAgentSystemMessage`: Overrides the default system prompt for Agent mode.
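As an illustrative sketch, a chat model combining `promptTemplates` and `chatOptions` might be declared as follows; the model name and message texts are placeholders, not defaults:

```yaml
models:
  - name: My-Chat-Model              # placeholder name
    provider: openai
    model: My-Chat-Model
    roles:
      - chat
      - edit
    promptTemplates:
      chat: llama3                   # must reference a valid template name
    chatOptions:
      baseSystemMessage: You are a concise coding assistant.
      baseAgentSystemMessage: You are a coding agent; prefer small, verifiable steps.
```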
- `embedOptions`: Applied if the model includes the `embed` role:
  - `maxChunkSize`: Maximum token count per document chunk (minimum: 128 tokens).
  - `maxBatchSize`: Maximum number of chunks per request (minimum: 1).
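For example, a dedicated embedding model entry using these options could look like the following sketch (the model name and endpoint are placeholders):

```yaml
models:
  - name: my-embedding-model         # placeholder name
    provider: openai
    model: my-embedding-model
    apiBase: http://www.example.com/v1
    roles:
      - embed
    embedOptions:
      maxChunkSize: 512              # tokens per chunk (minimum 128)
      maxBatchSize: 8                # chunks per request (minimum 1)
```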
- `defaultCompletionOptions`: Default generation settings for the model:
  - `contextLength`: Maximum context length (typically in tokens).
  - `maxTokens`: Maximum tokens generated per request.
  - `temperature`: Controls randomness (0.0 = deterministic, 1.0 = random).
  - `topP`: Nucleus sampling cumulative probability threshold.
  - `topK`: Number of top tokens considered at each step.
  - `stop`: Array of stop sequences that terminate generation.
- `requestOptions`: HTTP request settings specific to the model:
  - `timeout`: Timeout per LLM request.
  - `verifySsl`: Whether to verify SSL certificates.
  - `caBundlePath`: Path to a custom CA bundle.
  - `proxy`: Proxy URL for HTTP requests.
  - `headers`: Custom HTTP headers.
  - `extraBodyProperties`: Additional properties merged into the request body.
  - `noProxy`: Hostnames that bypass the proxy.
  - `clientCertificate`: Client certificate for HTTPS requests:
    - `cert`: Path to the certificate file.
    - `key`: Path to the private key file.
    - `passphrase`: Optional passphrase for the key.
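A sketch of a model entry for a private deployment behind a proxy with client-certificate authentication might combine these settings as follows; every host, path, and header shown is a placeholder:

```yaml
models:
  - name: Qwen3-Coder-30B-A3B-Instruct
    provider: openai
    model: Qwen3-Coder-30B-A3B-Instruct
    apiBase: https://llm.internal.example.com/v1   # placeholder endpoint
    requestOptions:
      timeout: 600
      verifySsl: true
      caBundlePath: /etc/ssl/certs/internal-ca.pem # placeholder path
      proxy: http://proxy.example.com:8080         # placeholder proxy
      noProxy:
        - localhost
      headers:
        X-Request-Source: vjsp-ide                 # placeholder header
      clientCertificate:
        cert: /etc/ssl/client/cert.pem             # placeholder path
        key: /etc/ssl/client/key.pem               # placeholder path
```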
- `autocompleteOptions`: Applied if the model includes the `autocomplete` role:
  - `disable`: If `true`, disables autocomplete for this model.
  - `maxPromptTokens`: Maximum tokens in the autocomplete prompt.
  - `debounceDelay`: Delay before triggering autocomplete (ms).
  - `modelTimeout`: Model request timeout for autocomplete (ms).
  - `maxSuffixPercentage`: Maximum percentage of the prompt allocated to the suffix.
  - `prefixPercentage`: Percentage of the prompt allocated to the prefix.
  - `transform`: If `false`, disables trimming of multi-line completions (default: `true`). Useful for models that generate better multi-line completions without post-processing.
  - `template`: Custom Mustache template using the variables `{{{ prefix }}}`, `{{{ suffix }}}`, `{{{ filename }}}`, `{{{ reponame }}}`, and `{{{ language }}}`.
  - `onlyMyCode`: If `true`, only includes code from the current repository.
  - `useCache`: If `true`, enables caching of completions.
  - `useImports`: If `true`, includes import statements in the context.
  - `useRecentlyEdited`: If `true`, includes recently edited files.
  - `useRecentlyOpened`: If `true`, includes recently opened files.
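For instance, an autocomplete model using a custom fill-in-the-middle template might be configured as in this sketch; the model name is a placeholder, and the sentinel markers in the template stand in for whatever tokens your model expects:

```yaml
models:
  - name: my-autocomplete-model      # placeholder name
    provider: openai
    model: my-autocomplete-model
    roles:
      - autocomplete
    autocompleteOptions:
      debounceDelay: 350
      maxPromptTokens: 1024
      transform: false               # keep the model's raw multi-line completions
      onlyMyCode: true
      useCache: true
      template: |
        <|fim_prefix|>{{{ prefix }}}<|fim_suffix|>{{{ suffix }}}<|fim_middle|>
```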
Example:
```yaml
name: My Config
version: 1.0.0
schema: v1
models:
  - name: Qwen3-Coder-30B-A3B-Instruct
    provider: openai
    model: Qwen3-Coder-30B-A3B-Instruct
    apiBase: http://www.example.com/v1
    capabilities:
      - tool_use
    roles:
      - chat
      - edit
      - apply
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 1500
  - name: Qwen2-5-VL-32B-Instruct
    provider: openai
    model: Qwen2-5-VL-32B-Instruct
    apiBase: http://www.example.com/v1
    capabilities:
      - tool_use
      - image_input
    roles:
      - chat
      - edit
      - apply
    defaultCompletionOptions:
      temperature: 0.3
```

context
Defines context providers that supply additional information to language models. Each provider can have custom parameters.
Attribute Reference:
- `provider` (Required): Identifier of the context provider (e.g., `code`, `docs`, `web`).
- `name`: Optional display name for the provider.
- `params`: Optional parameters that configure provider behavior.
Example:
```yaml
name: My Config
version: 1.0.0
schema: v1
context:
  - provider: file
  - provider: code
  - provider: diff
  - provider: http
    name: Context Server
    params:
      url: "https://api.example.com/server"
  - provider: terminal
```

rules
Rules are appended to the system prompt in all Agent, Chat, and Edit mode requests.
Configuration Example:
```yaml
name: My Config
version: 1.0.0
schema: v1
rules:
  - uses: sanity/sanity-opinionated      # Rule file stored in Task Control Center
  - uses: file://user/Desktop/rules.md   # Local rule file
```

Rule File Example:

```
---
name: Language Style Rule
---
Respond in Chinese
```

See Rules Deep Dive for more details.
prompts
Prompts can be invoked using the / command.
Configuration Example:
```yaml
name: My Config
version: 1.0.0
schema: v1
prompts:
  - uses: supabase/create-functions      # Prompt stored in Task Control Center
  - uses: file://user/Desktop/prompts.md # Local prompt file
```

Prompt File Example (prompts.md):

```
---
name: Generate Chinese Comments
invokable: true
---
Rewrite all comments in the current file into Chinese
```

See Prompts Deep Dive for more details.
docs
Specifies documentation sites to be indexed.
Attribute Reference:
- `name` (Required): Display name of the documentation site.
- `startUrl` (Required): Starting URL for the crawler (usually the root or intro page).
- `favicon`: URL to the site's favicon (defaults to `/favicon.ico` under `startUrl`).
- `useLocalCrawling`: If `true`, skips the default crawler and uses only local crawling.
Example:
```yaml
name: My Config
version: 1.0.0
schema: v1
docs:
  - name: VJSP Official Docs
    startUrl: https://docs.VJSP.dev/intro
    favicon: https://docs.VJSP.dev/favicon.ico
```

mcpServers
The Model Context Protocol (MCP), proposed by Anthropic, standardizes prompts, context, and tool usage. All MCP servers are supported via MCP context providers.
Attribute Reference:
- `name` (Required): Name of the MCP server.
- `command` (Required): Command to launch the server.
- `args`: Optional command arguments.
- `env`: Optional environment variables for the server process.
- `cwd`: Optional working directory (absolute or relative path).
- `requestOptions`: Optional request settings for SSE/HTTP servers (same format as the model `requestOptions`).
- `connectionTimeout`: Optional initial connection timeout.
Example:
```yaml
name: My Config
version: 1.0.0
schema: v1
mcpServers:
  - name: My MCP Server
    command: uvx
    args:
      - mcp-server-sqlite
      - --db-path
      - ./test.db
    cwd: /Users/NAME/project
    env:
      NODE_ENV: production
```

data
Configures destinations for development data reporting.
Attribute Reference:
- `name` (Required): Display name of the data destination.
- `destination` (Required): Endpoint that receives the data. Two types are supported:
  - HTTP endpoint: receives POST requests with JSON data.
  - File URL: a directory path where events are stored as `.jsonl` files.
- `schema` (Required): Schema version for the JSON data (`0.1.0` or `0.2.0`).
- `events`: Array of event names to include (defaults to all events if unspecified).
- `level`: Predefined field filter level (default: `all`):
  - `all`: Includes all fields.
  - `noCode`: Excludes code-related data (file contents, prompts, generations).
- `apiKey`: API key sent via the Bearer header.
- `requestOptions`: Request settings for event POSTs (same format as the model `requestOptions`).
Example:
```yaml
name: My Config
version: 1.0.0
schema: v1
data:
  - name: Local Data Repository
    destination: file:///Users/dallin/Documents/code/VJSPdev/VJSP-extras/external-data
    schema: 0.2.0
    level: all
  - name: Enterprise Private Endpoint
    destination: https://mycompany.com/ingest
    schema: 0.2.0
    level: noCode
    events:
      - autocomplete
```

Complete YAML Configuration Example
Below is a full config.yaml integrating all components:
```yaml
name: My Config
version: 1.0.0
schema: v1
models:
  - name: Qwen3-Coder-30B-A3B-Instruct
    provider: openai
    model: Qwen3-Coder-30B-A3B-Instruct
    roles:
      - chat
      - edit
    defaultCompletionOptions:
      temperature: 0.5
      maxTokens: 2000
    requestOptions:
      headers:
        Authorization: Bearer YOUR_OPENAI_API_KEY
  - name: Qwen2-5-VL-32B-Instruct
    provider: openai
    model: Qwen2-5-VL-32B-Instruct
    roles:
      - autocomplete
    autocompleteOptions:
      debounceDelay: 350
      maxPromptTokens: 1024
      onlyMyCode: true
    defaultCompletionOptions:
      temperature: 0.3
      stop:
        - "\n"
rules:
  - Responses must be concise and clear.
  - Prefer TypeScript over JavaScript by default.
prompts:
  - name: Unit Test Generator
    description: Generate unit tests for functions
    prompt: |
      Write a complete unit test suite for this function using Jest.
      Cover all edge cases thoroughly, and add explanations for each test case.
  - uses: myprofile/my-favorite-prompt
context:
  - provider: diff
  - provider: file
  - provider: code
mcpServers:
  - name: Dev Server
    command: npm
    args:
      - run
      - dev
    env:
      PORT: "3000"
data:
  - name: Enterprise Private Endpoint
    destination: https://mycompany.com/ingest
    schema: 0.2.0
    level: noCode
    events:
      - autocomplete
      - chatInteraction
```

Using YAML Anchors to Avoid Repetition
YAML anchors let you define shared model settings once and merge them into multiple entries. The `%YAML 1.1` directive is included because merge keys (`<<`) are a YAML 1.1 feature.

```yaml
%YAML 1.1
---
name: My Config
version: 1.0.0
schema: v1
model_defaults: &model_defaults
  provider: openai
  apiKey: my-api-key
  apiBase: https://api.example.com/llm
models:
  - name: qwen2.5-coder-7b-instruct
    <<: *model_defaults
    model: qwen2.5-coder-7b-instruct
    roles:
      - chat
      - edit
  - name: qwen2.5-coder-7b
    <<: *model_defaults
    model: qwen2.5-coder-7b
    useLegacyCompletionsEndpoint: false
    roles:
      - autocomplete
    autocompleteOptions:
      debounceDelay: 350
      maxPromptTokens: 1024
      onlyMyCode: true
```