Claude‑friendly edit tools + framed transport + live Unity NL test framework (#243)
* CI: gate desktop-parity on Anthropic key; pass anthropic_api_key like NL suite
* Add quickprobe prompt and CI workflow (mcp-quickprobe.md, unity-mcp-quickprobe.yml)
* stricter tool use to prevent subagent spawning and force MCP tools
* update workflow files to reduce likelihood of subagent spawning
* improve permissions for claude agent, fix mcpbridge timeout/token issue
* increase max turns to 10
* ci: align NL suite to new permissions schema; prevent subagent drift
* ci: NL suite -> mini prompt for e2e; add full NL/T prompt; server: ctx optional + project_root fallback; workflows: set UNITY_PROJECT_ROOT for CI
* ci: add checks:write; revert local project hardcodes (manifest, ProjectVersion.txt)
* tools: text-edit routing fixes (anchor_insert via text, CRLF calc); prompts: mini NL/T clarifications
* ci: use absolute UNITY_PROJECT_ROOT; prompts target TestProjects; server: accept relative UNITY_PROJECT_ROOT and bare spec URI
* ci: ignore Unity test project's packages-lock.json; remove from repo to avoid absolute paths
* CI: start persistent Unity Editor for MCP (guarded by license) + allow batch-mode bridge via UNITY_MCP_ALLOW_BATCH
* CI: hide license and pass via env to docker; fix invalid ref format
* CI: readiness probe uses handshake on Unity MCP port (deterministic)
* CI: fix YAML; use TCP handshake readiness probe (FRAMING=1)
* CI: prime Unity license via game-ci; mount ULF into container; extend readiness timeout
* CI: use ULF write + mount for Unity licensing; remove serial/email/pass from container
* CI: entitlement activation (UNITY_SERIAL=''); verify host ULF cache; keep mount
* CI: write ULF from secret and verify; drop entitlement activation step
* CI: detect any licensing path; GameCI prime; status dir env; log+probe readiness; fix YAML
* CI: add GameCI license prime; conditional ULF write; one-shot license validation; explicit status dir + license env
* CI: fix YAML (inline python), add Anthropic key detect via GITHUB_ENV; ready to run happy path
* CI: mount Unity token/ulf/cache dirs into container to share host license; create dirs before start
* CI: fix YAML indentation; write ULF on host; activate in container with shared mounts; mount .config and .cache too
* CI: gate Claude via outputs; mount all Unity license dirs; fix inline probe python; stabilize licensing flow
* CI: normalize detect to step outputs; ensure license dirs mounted and validated; fix indentation
* Bridge: honor UNITY_MCP_STATUS_DIR for heartbeat/status file (CI-friendly)
* CI: guard project path for activation/start; align tool allowlist; run MCP server with python; tighten secret scoping
* CI: finalize Unity licensing mounts + status dir; mode-detect (ULF/EBL); readiness logs+probe; Claude gating via outputs
* CI: fix YAML probe (inline python -c) and finalize happy-path Unity licensing and MCP/Claude wiring
* CI: inline python probe; unify Unity image and cache mounts; ready to test
* CI: fix docker run IMAGE placement; ignore cache find perms; keep same editor image
* CI: pass -manualLicenseFile to persistent Editor; keep mounts and single image
* CI: mount full GameCI cache to /root in persistent Unity; set HOME=/root; add optional license check
* CI: make -manualLicenseFile conditional; keep full /root mount and license check
* CI: set HOME=/github/home; mount GameCI cache there; adjust manualLicenseFile path; expand license check
* CI: EBL sign-in for persistent Unity (email/password/serial); revert HOME=/root and full /root mount; keep conditional manualLicenseFile and improved readiness
* CI: run full NL/T suite prompt (nl-unity-suite-full.md) instead of mini
* NL/T: require unified diffs + explicit verdicts in JUnit; CI: remove short sanity step, publish JUnit, upload artifacts
* NL/T prompt: require CDATA wrapping for JUnit XML fields; guidance for splitting embedded ]]>; keep VERDICT in CDATA only
* CI: remove in-container license check step; keep readiness and full suite
* NL/T prompt: add version header, stricter JUnit schema, hashing/normalization, anchors, statuses, atomic semantics, tool logging
* CI: increase Claude NL/T suite timeout to 30 minutes
* CI: pre-create reports dir and result files to avoid tool approval prompts
* CI: skip wait if container not running; skip Editor start if project missing; broaden MCP deps detection; expand allowed tools
* fixes to harden ManageScript
* CI: sanitize NL/T markdown report to avoid NUL/encoding issues
* revert breaking YAML changes
* CI: prime license, robust Unity start/wait, sanitize markdown via heredoc
* Resolve merge: accept upstream renames/installer (fix/installer-cleanup-v2) and keep local framing/script-editing
- Restored upstream server.py, EditorWindow, uv.lock
- Preserved ManageScript editing/validation; switched to atomic write + debounced refresh
- Updated tools/__init__.py to keep script_edits/resources and adopt new logger name
- All Python tests via uv: 7 passed, 6 skipped, 9 xpassed; Unity compile OK
* Fix Claude Desktop config path and atomic write issues
- Fix macOS path for Claude Desktop config: use ~/Library/Application Support/Claude/ instead of ~/.config/Claude/
- Improve atomic write pattern with backup/restore safety
- Replace File.Replace() with File.Move() for better macOS compatibility
- Add proper error handling and cleanup for file operations
- Resolves issue where installer couldn't find Claude Desktop config on macOS
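The atomic-write fix above can be sketched as follows. This is a language-neutral illustration in Python (the actual change is in the C# installer, which swapped `File.Replace()` for `File.Move()`); `write_config_atomic` is a hypothetical name, and `os.replace` plays the role the C# move does:

```python
import json
import os
import shutil
import tempfile

def write_config_atomic(path: str, data: dict) -> None:
    """Write JSON config atomically: stage a temp file in the same
    directory, keep a .bak restore point, then os.replace(), which
    swaps the file in one step on both POSIX and Windows."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the swap
        if os.path.exists(path):
            shutil.copy2(path, path + ".bak")  # backup for restore safety
        os.replace(tmp, path)  # atomic swap; readers never see a partial file
    except Exception:
        if os.path.exists(tmp):
            os.remove(tmp)  # clean up the staging file on failure
        raise
```

The key property is that the destination path always holds either the old or the new complete file, never a half-written one.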
* Editor: use macConfigPath on macOS for MCP client config writes (Claude Desktop, etc.). Fallback to linuxConfigPath only if mac path missing.
* Models: add macConfigPath to McpClient for macOS config path selection (fixes CS1061 in editor window).
* Editor: on macOS, prefer macConfigPath in ManualConfigEditorWindow (fallback to linux path); Linux/Windows unchanged.
* Fix McpClient: align with upstream/main, prep for framing split
* NL suite: shard workflow; tighten bridge readiness; add MCP preflight; use env-based shard vars
* NL suite: fix shard step indentation; move shard vars to env; remove invalid inputs
* MCP clients: split VSCode Copilot config paths into macConfigPath and linuxConfigPath
* Unity bridge: clean stale status; bind host; robust wait probe with IPv4/IPv6 + diagnostics
* CI: use MCPForUnity.Editor.MCPForUnityBridge.StartAutoConnect as executeMethod
* Action wiring: inline mcpServers in settings for all shards; remove redundant .claude/mcp.json step
* CI: embed mcpServers in settings for all shards; fix startup sanity step; lint clean
* CI: pin claude-code-base-action to e6f32c8; use claude_args --mcp-config; switch to allowed_tools; ensure MCP config per step
* CI: unpin claude-code-base-action to @beta (commit ref not found)
* CI: align with claude-code-base-action @beta; pass MCP via claude_args and allowedTools
* Editor: Fix apply_text_edits heuristic when edits shift positions; recompute method span on candidate text with fallback delta adjustment
* CI: unify MCP wiring across workflows; write .claude/mcp.json; switch to claude_args with --mcp-config/--allowedTools; remove unsupported inputs
* CI: collapse NL suite shards into a single run to avoid repeated test execution
* CI: minimize allowedTools for NL suite to essential Unity MCP + Bash("git:*") + Write
* CI: mkdir -p reports before run; remove unsupported --timeout-minutes from claude_args
* CI: broaden allowedTools to include find_in_file and mcp__unity__*
* CI: enable use_node_cache and switch NL suite model to claude-3-7-haiku-20250219
* CI: disable use_node_cache to avoid setup-node lockfile error
* CI: set NL suite model to claude-3-haiku-20240307
* CI: cap Haiku output with --max-tokens 2048 for NL suite
* CI: switch to claude-3-7-sonnet-latest and remove unsupported --max-tokens
* CI: update allowedTools to Bash(*) and explicit Unity MCP tool list
* CI: update NL suite workflow (latest tweaks)
* Tests: tighten NL suite prompt for logging, hash discipline, stale retry, evidence windows, diff cap, and VERDICT line
* Add disallowed tools to NL suite workflow
* docs: clarify stale write retry
* Add fallback JUnit report and adjust publisher
* Indent fallback JUnit XML in workflow
* fix: correct fallback JUnit report generation
* Update mcp-quickprobe.md
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* Update mcp-quickprobe.md
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* Update Response.cs
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* Update MCPForUnityBridge.cs
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* fix: correct McpTypes reference
* Add directory existence checks for symlink and XDG paths
* fix: only set installation flag after successful server install
* Update resource_tools.py
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* fix: respect mac config paths
* Use File.Replace for atomic config write
* Remove unused imports in manage_script
* bump server version
* Tests: update NL suite prompt and workflows; remove deprecated smoke/desktop-parity; quickprobe tidy
* Editor: atomic config write via File.Replace fallback; remove redundant backups and racey exists checks
* CI: harden NL suite - idempotent docker, gate on unity_ok, safer port probe, least-priv perms
* Editor: make atomic config write restoration safe (flag writeDone; copy-overwrite restore; cleanup in finally)
* CI: fix fallback JUnit heredoc by using printf lines (no EOF delimiter issues)
* CI: switch NL suite to mini prompt; mini prompt honors / and NL discipline
* CI: replace claude_args with allowed_tools/model/mcp_config per action schema
* CI: expand UNITY_PROJECT_ROOT in MCP config heredoc
* EditorWindow: add cross-platform fallback for File.Replace; macOS-insensitive PathsEqual; safer uv resolve; honor macConfigPath
* CI: strengthen JUnit publishing for NL mini suite (normalize, debug list, publish both, fail_on_parse_error)
* CI: set job-wide JUNIT_OUT/MD_OUT; normalization uses env; publish references env and ungroup reports
* CI: publish a single normalized JUnit (reports/junit-for-actions.xml); fallback writes same; avoid checkName/reportPaths mismatch
* CI: align mini prompt report filenames; redact Unity log tail in diagnostics
* chore: sync workflow and mini prompt; redacted logs; JUnit normalization/publish tweaks
* CI: redact sensitive tokens in Stop Unity; docs: CI usage + edit tools
* prompts: update nl-unity-suite-full (mini-style setup + reporting discipline); remove obsolete prompts
* CI: harden NL workflows (timeout_minutes, robust normalization); prompts: unify JUnit suite name and reporting discipline
* prompts: add guarded write pattern (LF hash, stale_file retry) to full suite
* prompts: enforce continue-on-failure, driver flow, and status handling in full suite
* Make test list more explicit in prompt. Get rid of old test prompts for hygiene.
* prompts: add stale fast-retry (server hash) + in-memory buf guidance
* CI: standardize JUNIT_OUT to reports/junit-nl-suite.xml; fix artifact upload indentation; prompt copy cleanups
* prompts: reporting discipline — append-only fragments, batch writes, no model round-trip
* prompts: stale fast-retry preference, buffer/sha carry, snapshot revert, essential logging
* workflows(nl-suite): precreate report skeletons, assemble junit, synthesize markdown; restrict allowed_tools to append-only Bash + MCP tools
* this too
* Update README-DEV.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update .github/workflows/claude-nl-suite-mini.yml
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update .github/workflows/claude-nl-suite.yml
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* workflows(nl-mini): fix YAML indentation/trailing spaces under with: and cleanup heredoc spacing
* workflows(nl-suite): fix indentation on docker logs redaction line (YAML lint)
* Add write to allowlist
* nl-suite: harden reporting discipline (fragment-only writes, forbid alt paths); workflow: clean stray junit-*updated*.xml
* nl-suite: enforce end-of-suite single Write (no bash redirection); workflow: restrict allowed_tools to Write+MCP only
* prompts(nl-full): end-of-suite results must be valid XML with single <cases> root and only <testcase> children; no raw text outside CDATA
* workflows(nl-suite): make Claude step non-fatal; tolerant normalizer extracts <testcase> via regex on bad fragments
* nl-suite: fix stale classname to UnityMCP.NL-T in mini fallback; prompt: require re-read after every revert; correct PLAN/PROGRESS to 15
* nl-suite: fix fallback JUnit classname to UnityMCP.NL-T; prompt: forbid create_script and env/mkdir checks, enforce single baseline-byte revert flow and post-revert re-read; add corruption-handling guidance
* prompts(nl-full): after each write re-read raw bytes to refresh pre_sha; prefer script_apply_edits for anchors; avoid header/using changes
* prompts(nl-full): canonicalize outputs to /; allow small fragment appends via Write or Bash(printf/echo); forbid wrappers and full-file round-trips
* prompts(nl-full): finalize markdown formatting for guarded write, execution order, specs, status
* workflows(nl-suite, mini): header/lint fixes and constrained Bash append path; align allowed_tools
* prompts(nl-full): format Fast Restore, Guarded Write, Execution, Specs, Status as proper markdown lists and code fences
* workflows(nl-suite): keep header tidy and append-path alignment with prompt
* minor fix
* workflows(nl-suite): fix indentation and dispatch; align allowed_tools and revert helper
* prompts(nl-full): switch to read_resource for buf/sha; re-read only when needed; convert 'Print this once' to heading; note snapshot helper creates parent dirs
* workflows(nl-suite): normalize step removes bootstrap when real testcases present; recompute tests/failures
* workflows(nl-suite): enrich Markdown summary by extracting per-test <system-out> blocks (truncated)
* clarify prompt resilience instructions
* ci(nl-suite): revert prompt and workflow to known-good e0f8a72 for green run; remove extra MD details
* ci(nl-suite): minimal fixes — no-mkdir guard in prompt; drop bootstrap and recompute JUnit counts
* ci(nl-suite): richer JUnit→Markdown report (per-test system-out)
* Small guard for incorrect asset read call.
* ci(nl-suite): refine MD builder — unescape XML entities, safe code fences, PASS/FAIL badges
* Update UnityMcpBridge/UnityMcpServer~/src/tools/resource_tools.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update UnityMcpBridge/UnityMcpServer~/src/unity_connection.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update UnityMcpBridge/UnityMcpServer~/src/tools/manage_script_edits.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update .github/scripts/mark_skipped.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update .github/scripts/mark_skipped.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update .github/scripts/mark_skipped.py
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* server(manage_script): robust URI handling — percent-decode file://, normalize, strip host/leading slashes, return Assets-relative if present
* Update .claude/prompts/nl-unity-suite-full.md
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
* tests(framing): reduce handshake poll window, nonblocking peek to avoid disconnect race; still enforce pre-handshake data drop
* tests(manage_script): add _split_uri tests for unity://path, file:// URLs (decoded/Assets-relative), and plain paths
* server+tests: fix handshake syntax error; robust file:// URI normalization in manage_script; add _split_uri tests; adjust stdout scan to ignore venv/site-packages
* bridge(framing): accept zero-length frames (treat as empty keepalive)
* tests(logging): use errors='replace' on decode fallback to avoid silent drops
* resources(list): restrict to Assets/, resolve symlinks, enforce .cs; add traversal/outside-path tests
* Update .claude/prompts/nl-unity-suite-full.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update UnityMcpBridge/UnityMcpServer~/src/unity_connection.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* misc: framing keepalive (zero-length), regex preview consistency, resource.list hardening, URI parsing, legacy update routing, test cleanups
* docs(tools): richer MCP tool descriptions; tests accept decorator kwargs; resource URI parsing hardened
* Update .claude/prompts/nl-unity-suite-full.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update UnityMcpBridge/UnityMcpServer~/src/tools/resource_tools.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update UnityMcpBridge/UnityMcpServer~/src/unity_connection.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* net+docs: hard-reject zero-length frames; TCP_NODELAY on connect; Assets detection case-insensitive; NL prompt statuses aligned
* prompt(nl-suite): constrain Write destinations under reports/, forbid traversal
* prompt+net: harden Write path rules; use monotonic deadline and plain-text advisory for non-framed peers
* unity_connection: restore recv timeout via try/finally; make global connection getter thread-safe with module lock and double-checked init
* NL/T prompt: pin structured edit ops for T-D/T-E; add schema-error guarded write behavior; keep existing path/URI and revert rules
* unity_connection: add FRAMED_MAX; use ValueError for framed length violations; lower framed receive log to debug; serialize connect() with per-instance lock
* ManageScript: use UTF8Encoding(without BOM) for atomic writes in ApplyTextEdits/EditScript to align with Create/Update and avoid BOM-related diffs/hash mismatches
* NL/T prompt: make helper deletion regex multiline-safe ((?ms) so ^ anchors line starts)
* ManageScript: emit structured overlap status {status:"overlap"} for overlapping edit ranges in apply_text_edits and edit paths
* NL/T prompt: clarify fallback vs failure — fallback only for unsupported/missing_field; treat bad_request as failure; note unsupported after fallback as failure
* NL/T prompt: pin deterministic overlap probe (apply_text_edits two ranges from same snapshot); gate too_large behind RUN_TOO_LARGE env hint
* TB update
* NL/T prompt: harden Output Rules — constrain Bash(printf|echo) to stdout-only; forbid redirection/here-docs/tee; only scripts/nlt-revert.sh may mutate FS
* Prompt: enumerate allowed script_apply_edits ops; add manage_editor/read_console guidance; fix T‑F atomic batch to single script_apply_edits. ManageScript: regex timeout for diagnostics; symlink ancestor guard; complete allowed-modes list.
* Fixes
* ManageScript: add rich overlap diagnostics (conflicts + hint) for both text range and structured batch paths
* ManageScript: return structured {status:"validation_failed"} diagnostics in create/update/edits and validate before commit
* ManageScript: echo canonical uri in responses (create/read/update/apply_text_edits/structured edits) to reinforce resource identity
* improve clarity of capabilities message
* Framing: allow zero-length frames on both ends (C# bridge, Python server). Prompt: harden T-F to single text-range apply_text_edits batch (descending order, one snapshot). URI: normalize file:// outside Assets by stripping leading slash.
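The framing entries above converged on length-prefixed frames in which a zero-length frame is a keepalive and oversized lengths are rejected (the FRAMED_MAX cap). A minimal sketch of that wire discipline, assuming a 4-byte big-endian length header and a hypothetical cap value (neither the header layout nor the actual limit is shown in this log):

```python
import struct

FRAMED_MAX = 16 * 1024 * 1024  # assumed cap; stands in for the server's FRAMED_MAX

def encode_frame(payload: bytes) -> bytes:
    # Assumed wire format: 4-byte big-endian length prefix, then payload.
    # A zero-length frame (empty payload) serves as a keepalive/heartbeat.
    return struct.pack(">I", len(payload)) + payload

def decode_frame(buf: bytes):
    """Return (payload, remainder), or (None, buf) if the frame is
    incomplete. Zero-length payloads come back as b'' so callers can
    treat them as heartbeats rather than as errors."""
    if len(buf) < 4:
        return None, buf
    (n,) = struct.unpack(">I", buf[:4])
    if n > FRAMED_MAX:
        # Mirrors the commit that raises ValueError on framed length violations
        raise ValueError(f"framed length {n} exceeds FRAMED_MAX")
    if len(buf) < 4 + n:
        return None, buf
    return buf[4:4 + n], buf[4 + n:]
```

Note the zero-length policy changed across commits (first rejected, later accepted on both ends); the sketch reflects the final accept-as-keepalive behavior.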
* ManageScript: include new sha256 in success payload for apply_text_edits; harden TryResolveUnderAssets by rejecting symlinked ancestors up to Assets/.
* remove claudetest dir
* manage_script_edits: normalize method-anchored anchor_insert to insert_method (map text->replacement); improves CI compatibility for T‑A/T‑E without changing Editor behavior.
* tighten testing protocol around mkdir
* manage_script: validate create_script inputs (Assets/.cs/name/no traversal); add Assets/ guard to delete_script; validate level+Assets in validate_script; make legacy manage_script optional params; harden legacy update routing with base64 reuse and payload size preflight.
* Tighten prompt for testing
* Update .claude/prompts/nl-unity-suite-full.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update .claude/prompts/nl-unity-suite-full.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update UnityMcpBridge/Editor/Tools/ManageScript.cs
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* manage_script_edits: honor ignore_case on anchor_insert and regex_replace in both direct and text-conversion paths (MULTILINE|IGNORECASE).
* remove extra file
* workflow: use python3 for inline scripts and port detection on ubuntu-latest.
* Tighten prompt + manage_script
* Update UnityMcpBridge/UnityMcpServer~/src/tools/manage_script_edits.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update UnityMcpBridge/UnityMcpServer~/src/tools/manage_script_edits.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update .claude/prompts/nl-unity-suite-full.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update UnityMcpBridge/Editor/Tools/ManageScript.cs
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update .claude/prompts/nl-unity-suite-full.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* manage_script: improve file:// UNC handling; preserve POSIX absolute semantics internally; keep test-expected slash stripping for non-Assets paths.
* ManageScript.cs: add TimeSpan timeouts to all Regex uses (IsMatch/Match/new Regex) and keep CultureInvariant/Multiline options; reduces risk of catastrophic backtracking stalls.
* workflow: ensure reports/ exists in markdown build step to avoid FileNotFoundError when writing MD_OUT.
* fix brace
* manage_script_edits: expand backrefs for regex_replace in preview->text conversion and translate to \g<n> in local apply; keeps previews and actual edits consistent.
* anchor_insert: default to position=after, normalize surrounding newlines in Python conversion paths; C# path ensures trailing newline and skips duplicate insertion within class.
* feat(mcp): add get_sha tool; apply_text_edits normalization+overlap preflight+strict; no-op evidence in C#; update NL suite prompt; add unit tests
* feat(frames): accept zero-length heartbeat frames in client; add heartbeat test
* feat(edits): guard destructive regex_replace with structural preflight; add robust tests; prompt uses delete_method for temp helper
* feat(frames): bound heartbeat loop with timeout/threshold; align zero-length response with C#; update test
* SDK hardening: atomic multi-span text edits; stop forcing sequential for structured ops; forward options on apply_text_edits; add validate=relaxed support and scoped checks; update NL/T prompt; add tests for options forwarding, relaxed mode, and atomic batches
* Router: default applyMode=atomic for multi-span apply_text_edits; add tests
* CI prompt: pass options.validate=relaxed for T-B/C; options.applyMode=atomic for T-F; emphasize always writing testcase and restoring on errors
* Validation & DX: add validate=syntax (scoped), standardize evidence windows; early regex compile with hints; debug_preview for apply_text_edits
* Update UnityMcpBridge/Editor/Windows/MCPForUnityEditorWindow.cs
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* NL/T suite-driven edits: LongUnityScriptClaudeTest, bridge helpers, server_version; prepare framing tests
* Fix duplicate macConfigPath field in McpClient to resolve CS0102
* Editor threading: run EnsureServerInstalled on main thread; marshal EditorPrefs/DeleteKey + logging via delayCall
* Docs(apply_text_edits): strengthen guidance on 1-based positions, verify-before-edit, and recommend anchors/structured edits
* Docs(script_apply_edits): add safety guidance (anchors, method ops, validators) and recommended practices
* Framed VerifyBridgePing in editor window; docs hardening for apply_text_edits and script_apply_edits
---------
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
main · parent 22e8016aee · commit f4712656fa
@ -0,0 +1,45 @@
# Unity NL Editing Suite — Natural Mode

You are running inside CI for the **unity-mcp** repository. Your task is to demonstrate end‑to‑end **natural‑language code editing** on a representative Unity C# script using whatever capabilities and servers are already available in this session. Work autonomously. Do not ask the user for input. Do NOT spawn subagents: they will not have access to the MCP server process attached to the top‑level agent.

## Mission

1) **Discover capabilities.** Quietly inspect the tools and any connected servers that are available to you at session start. If the server offers a primer or capabilities resource, read it before acting.
2) **Choose a target file.** Prefer `TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs` if it exists; otherwise choose a simple, safe C# script under `TestProjects/UnityMCPTests/Assets/`.
3) **Perform a small set of realistic edits** using minimal, precise changes (not full‑file rewrites). Examples of small edits you may choose from (pick 3–6 total):
   - Insert a new, small helper method (e.g., a logger or counter) in a sensible location.
   - Add a short anchor comment near a key method (e.g., above `Update()`), then add or modify a few lines nearby.
   - Append an end‑of‑class utility method (e.g., formatting or clamping helper).
   - Make a safe, localized tweak to an existing method body (e.g., add a guard or a simple accumulator).
   - Optionally include one idempotency/no‑op check (re‑apply an edit and confirm nothing breaks).
4) **Validate your edits.** Re‑read the modified regions and verify the changes exist, compile‑risk is low, and surrounding structure remains intact.
5) **Report results.** Produce both:
   - A JUnit XML at `reports/junit-nl-suite.xml` containing a single suite named `UnityMCP.NL` with one test case per sub‑test you executed (mark pass/fail and include helpful failure text).
   - A summary markdown at `reports/junit-nl-suite.md` that explains what you attempted, what succeeded/failed, and any follow‑ups you would try.
6) **Be gentle and reversible.** Prefer targeted, minimal edits; avoid wide refactors or non‑deterministic changes.
## Assumptions & Hints (non‑prescriptive)

- A Unity‑oriented MCP server is expected to be connected. If a server‑provided **primer/capabilities** resource exists, read it first. If no primer is available, infer capabilities from your visible tools in the session.
- In CI/headless mode, when calling `mcp__unity__list_resources` or `mcp__unity__read_resource`, include:
  - `ctx: {}`
  - `project_root: "TestProjects/UnityMCPTests"` (the server will also accept the absolute path passed via env)

  Example: `{ "ctx": {}, "under": "Assets/Scripts", "pattern": "*.cs", "project_root": "TestProjects/UnityMCPTests" }`
- If the preferred file isn’t present, locate a fallback C# file with simple, local methods you can edit safely.
- If a compile command is available in this environment, you may optionally trigger it; if not, rely on structural checks and localized validation.
## Output Requirements (match NL suite conventions)

- JUnit XML at `$JUNIT_OUT` if set, otherwise `reports/junit-nl-suite.xml`.
- Single suite named `UnityMCP.NL`, one `<testcase>` per sub‑test; include `<failure>` on errors.
- Markdown at `$MD_OUT` if set, otherwise `reports/junit-nl-suite.md`.

Constraints (for fast publishing):

- Log allowed tools once as a single line: `AllowedTools: ...`.
- For every edit: Read → Write (with precondition hash) → Re‑read; on `{status:"stale_file"}` retry once after re‑read.
- Keep evidence to ±20–40 line windows; cap unified diffs at 300 lines and note truncation.
- End `<system-out>` with `VERDICT: PASS` or `VERDICT: FAIL`.
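The Read → Write → Re‑read constraint above can be sketched as a small driver. This is an illustration only: `read_fn`, `write_fn`, and `make_edit` are hypothetical stand-ins for the MCP read/edit tools, and the LF normalization mirrors the hash discipline the suite asks for:

```python
import hashlib

def lf_sha256(text: str) -> str:
    """Precondition hash over LF-normalized bytes, so CRLF checkouts
    and LF server state produce the same digest."""
    return hashlib.sha256(text.replace("\r\n", "\n").encode("utf-8")).hexdigest()

def guarded_write(read_fn, write_fn, make_edit):
    """Read, write with a precondition hash, and retry exactly once
    after a re-read if the server reports {status:"stale_file"}."""
    text = read_fn()
    result = write_fn(make_edit(text), lf_sha256(text))
    if result.get("status") == "stale_file":
        text = read_fn()  # re-read to refresh content and hash
        result = write_fn(make_edit(text), lf_sha256(text))
    return result
```

The single retry keeps the flow deterministic: a second consecutive stale result is surfaced to the caller rather than looped on.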
## Guardrails

- No destructive operations. Keep changes minimal and well‑scoped.
- Don’t leak secrets or environment details beyond what’s needed in the reports.
- Work without user interaction; do not prompt for approval mid‑flow.

> If capabilities discovery fails, still produce the two reports that clearly explain why you could not proceed and what evidence you gathered.
@ -0,0 +1,234 @@
# Unity NL/T Editing Suite — CI Agent Contract

You are running inside CI for the `unity-mcp` repo. Use only the tools allowed by the workflow. Work autonomously; do not prompt the user. Do NOT spawn subagents.

**Print this once, verbatim, early in the run:**

AllowedTools: Write,Bash(printf:*),Bash(echo:*),Bash(scripts/nlt-revert.sh:*),mcp__unity__manage_editor,mcp__unity__list_resources,mcp__unity__read_resource,mcp__unity__apply_text_edits,mcp__unity__script_apply_edits,mcp__unity__validate_script,mcp__unity__find_in_file,mcp__unity__read_console,mcp__unity__get_sha

---
## Mission

1) Pick the target file (prefer):
   - `unity://path/Assets/Scripts/LongUnityScriptClaudeTest.cs`
2) Execute **all** NL/T tests in order using minimal, precise edits.
3) Validate each edit with `mcp__unity__validate_script(level:"standard")`.
4) **Report**: write one `<testcase>` XML fragment per test to `reports/<TESTID>_results.xml`. Do **not** read or edit `$JUNIT_OUT`.
5) **Restore** the file after each test using the OS‑level helper (fast), not a full‑file text write.

---
## Environment & Paths (CI)

- Always pass: `project_root: "TestProjects/UnityMCPTests"` and `ctx: {}` on list/read/edit/validate.
- **Canonical URIs only**:
  - Primary: `unity://path/Assets/...` (never embed `project_root` in the URI)
  - Relative (when supported): `Assets/...`
- File paths for the helper script are workspace‑relative:
  - `TestProjects/UnityMCPTests/Assets/...`

CI provides:

- `$JUNIT_OUT=reports/junit-nl-suite.xml` (pre‑created; leave alone)
- `$MD_OUT=reports/junit-nl-suite.md` (synthesized from JUnit)
- Helper script: `scripts/nlt-revert.sh` (snapshot/restore)

---
## Tool Mapping

- **Anchors/regex/structured**: `mcp__unity__script_apply_edits`
  - Allowed ops: `anchor_insert`, `replace_range`, `regex_replace` (no overlapping ranges within a single call)
- **Precise ranges / atomic batch**: `mcp__unity__apply_text_edits` (non‑overlapping ranges)
  - Multi‑span batches are computed from the same fresh read and sent atomically by default.
  - Prefer `options.applyMode:"atomic"` when passing options for multiple spans; for single spans, sequential is fine.
- **Hash‑only**: `mcp__unity__get_sha` — returns `{sha256,lengthBytes,lastModifiedUtc}` without the file body
- **Validation**: `mcp__unity__validate_script(level:"standard")`
  - For edits, you may pass `options.validate`:
    - `standard` (default): full‑file delimiter balance checks.
    - `relaxed`: scoped checks for interior, non‑structural text edits; do not use for header/signature/brace‑touching changes.
- **Reporting**: `Write` small XML fragments to `reports/*_results.xml`
- **Editor state/flush**: `mcp__unity__manage_editor` (use sparingly; no project mutations)
- **Console readback**: `mcp__unity__read_console` (INFO capture only; do not assert in place of `validate_script`)
- **Snapshot/Restore**: `Bash(scripts/nlt-revert.sh:*)`
- For `script_apply_edits`: use `name` + workspace‑relative `path` only (e.g., `name="LongUnityScriptClaudeTest"`, `path="Assets/Scripts"`). Do not pass `unity://...` URIs as `path`.
- For `apply_text_edits` / `read_resource`: use the URI form only (e.g., `uri="unity://path/Assets/Scripts/LongUnityScriptClaudeTest.cs"`). Do not concatenate `Assets/` with a `unity://...` URI.
- Never call generic Bash like `mkdir`; the revert helper creates needed directories. Use only `scripts/nlt-revert.sh` for snapshot/restore.
- If you believe a directory is missing, you are mistaken: the workflow pre‑creates it, and the snapshot helper creates it if needed. Do not attempt any Bash other than `scripts/nlt-revert.sh:*`.
### Structured edit ops (required usage)

# Insert a helper RIGHT BEFORE the final class brace (NL‑3, T‑D)

1) Prefer `script_apply_edits` with a regex capture on the final closing brace:

```json
{"op":"regex_replace",
 "pattern":"(?s)(\\r?\\n\\s*\\})\\s*$",
 "replacement":"\\n  // Tail test A\\n  // Tail test B\\n  // Tail test C\\1"}
```

2) If the server returns `unsupported` (op not available) or `missing_field` (op‑specific), FALL BACK to `apply_text_edits`:
   - Find the last `}` in the file (the class closing brace) by scanning from the end.
   - Insert the three comment lines immediately before that index with one non‑overlapping range.


# Insert after GetCurrentTarget (T‑A/T‑E)

- Use `script_apply_edits` with:

```json
{"op":"anchor_insert","afterMethodName":"GetCurrentTarget","text":"private int __TempHelper(int a,int b)=>a+b;\\n"}
```

# Delete the temporary helper (T‑A/T‑E)

- Prefer structured delete:
  - Use `script_apply_edits` with `{ "op":"delete_method", "className":"LongUnityScriptClaudeTest", "methodName":"PrintSeries" }` (or `__TempHelper` for T‑A).
  - If structured delete is unavailable, fall back to `apply_text_edits` with a single `replace_range` spanning the exact method block (bounds computed from a fresh read); avoid whole‑file regex deletes.

# T‑B (replace method body)

- Use `mcp__unity__apply_text_edits` with a single `replace_range` strictly inside the `HasTarget` braces.
- Compute start/end from a fresh `read_resource` at test start. Do not edit the signature or header.
- On `{status:"stale_file"}`: retry once with the server‑provided hash; if absent, re‑read once and retry.
- On `bad_request`: write the testcase with `<failure>…</failure>`, restore, and continue to the next test.
- On `missing_field`: FALL BACK per above; if the fallback also returns `unsupported` or `bad_request`, fail as above.

> Don’t use `mcp__unity__create_script`. Avoid the header/`using` region entirely.


Span formats for `apply_text_edits`:

- Prefer LSP ranges (0‑based): `{ "range": { "start": {"line": L, "character": C}, "end": {…} }, "newText": "…" }`
- Explicit fields are 1‑based: `{ "startLine": L1, "startCol": C1, "endLine": L2, "endCol": C2, "newText": "…" }`
- The SDK preflights overlap after normalization; overlapping non‑zero spans → `{status:"overlap"}` with conflicts and no file mutation.
- Optional debug: pass `strict:true` to reject explicit 0‑based fields (otherwise they are normalized and a warning is emitted).
- Apply‑mode guidance: the router defaults to atomic for multi‑span; you can explicitly set `options.applyMode` if needed.
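The overlap preflight above can be illustrated over normalized 0‑based `[start, end)` offsets (a sketch that mirrors, not reproduces, the SDK check):

```python
# Sketch: flag any two overlapping non-zero-width spans; zero-width spans
# (pure insertions) never conflict. Spans are 0-based [start, end) offsets.
def find_overlaps(spans):
    solid = sorted(s for s in spans if s[0] != s[1])  # drop insertions
    conflicts = []
    for (a_start, a_end), (b_start, b_end) in zip(solid, solid[1:]):
        if b_start < a_end:  # next span starts before the previous one ends
            conflicts.append(((a_start, a_end), (b_start, b_end)))
    return conflicts
```
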

---

## Output Rules (JUnit fragments only)

- For each test, create **one** file: `reports/<TESTID>_results.xml` containing exactly a single `<testcase ...> ... </testcase>`.
  Put human-readable lines (PLAN/PROGRESS/evidence) **inside** `<system-out><![CDATA[ ... ]]></system-out>`.
- If content contains `]]>`, split the CDATA: replace `]]>` with `]]]]><![CDATA[>`.
- Evidence windows only (±20–40 lines). If showing a unified diff, cap at 100 lines and note truncation.
- **Never** open/patch `$JUNIT_OUT` or `$MD_OUT`; CI merges fragments and synthesizes the Markdown.
- Write destinations must match: `^reports/[A-Za-z0-9._-]+_results\.xml$`
- Snapshot files must live under `reports/_snapshots/`
- Reject absolute paths and any path containing `..`
- Reject control characters and line breaks in filenames; enforce UTF‑8
- Cap basename length to ≤64 chars; cap any path segment to ≤100 and total path length to ≤255
- `Bash(printf|echo)` must write to stdout only. Do not use shell redirection, here‑docs, or `tee` to create/modify files. The only allowed FS mutation is via `scripts/nlt-revert.sh`.

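The CDATA rule can be shown concretely (a small sketch of the same replacement the suite applies when assembling `<system-out>`):

```python
# Sketch: make text safe for a CDATA section by splitting every literal
# "]]>" so no single CDATA block contains the terminator sequence.
def cdata_safe(text: str) -> str:
    return text.replace("]]>", "]]]]><![CDATA[>")

def wrap_system_out(text: str) -> str:
    return "<system-out><![CDATA[" + cdata_safe(text) + "]]></system-out>"
```
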

**Example fragment**

```xml
<testcase classname="UnityMCP.NL-T" name="NL-1. Method replace/insert/delete">
  <system-out><![CDATA[
PLAN: NL-0,NL-1,NL-2,NL-3,NL-4,T-A,T-B,T-C,T-D,T-E,T-F,T-G,T-H,T-I,T-J (len=15)
PROGRESS: 2/15 completed
pre_sha=<...>
... evidence windows ...
VERDICT: PASS
]]></system-out>
</testcase>
```

Note: Emit the PLAN line only in NL‑0 (do not repeat it for later tests).

### Fast Restore Strategy (OS‑level)

- Snapshot once at NL‑0, then restore after each test via the helper.
- Snapshot (once after confirming the target):

```bash
scripts/nlt-revert.sh snapshot "TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs" "reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline"
```

- Log the `snapshot_sha=...` printed by the script.
- Restore (after each mutating test):

```bash
scripts/nlt-revert.sh restore "TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs" "reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline"
```

- Then `read_resource` to confirm and (optionally) `validate_script(level:"standard")`.
- If the helper fails: fall back once to a guarded full‑file restore using the baseline bytes; then continue.

### Guarded Write Pattern (for edits, not restores)

- Before any mutation: `res = mcp__unity__read_resource(uri)`; `pre_sha = sha256(res.bytes)`.
- Write with `precondition_sha256 = pre_sha` on `apply_text_edits`/`script_apply_edits`.
- To compute `pre_sha` without reading the file contents, you may instead call `mcp__unity__get_sha(uri).sha256`.
- On `{status:"stale_file"}`:
  - Retry once using the server‑provided hash (e.g., `data.current_sha256` or `data.expected_sha256`, per the API schema).
  - If absent, one re‑read then a final retry. No loops.
- After success: immediately re‑read via `res2 = mcp__unity__read_resource(uri)` and set `pre_sha = sha256(res2.bytes)` before any further edits in the same test.
- Prefer anchors (`script_apply_edits`) for end‑of‑class / above‑method insertions. Keep edits inside method bodies. Avoid the header/`using` region.

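The guard-and-single-retry flow can be sketched as follows, with `read_bytes`/`apply_edit` as hypothetical stand-ins for the actual MCP calls:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def guarded_edit(read_bytes, apply_edit):
    """read_bytes() -> bytes; apply_edit(precondition_sha256) -> result dict."""
    pre_sha = sha256_hex(read_bytes())
    result = apply_edit(pre_sha)
    if result.get("status") == "stale_file":
        # Retry once with the server-provided hash; else one re-read. No loops.
        server_sha = result.get("data", {}).get("current_sha256")
        result = apply_edit(server_sha or sha256_hex(read_bytes()))
    return result
```
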
**On non‑JSON/transport errors (timeout, EOF, connection closed):**

- Write `reports/<TESTID>_results.xml` with a `<testcase>` that includes a `<failure>` or `<error>` node capturing the error text.
- Run the OS restore via `scripts/nlt-revert.sh restore …`.
- Continue to the next test (do not abort).

**If any write returns `bad_request`, or `unsupported` after a fallback attempt:**

- Write `reports/<TESTID>_results.xml` with a `<testcase>` that includes a `<failure>` node capturing the server error, include evidence, and end with `VERDICT: FAIL`.
- Run `scripts/nlt-revert.sh restore ...` and continue to the next test.

### Execution Order (fixed)

- Run exactly: NL-0, NL-1, NL-2, NL-3, NL-4, T-A, T-B, T-C, T-D, T-E, T-F, T-G, T-H, T-I, T-J (15 total).
- Before NL-1..T-J: `Bash(scripts/nlt-revert.sh:restore "<target>" "reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline")` IF the baseline exists; skip for NL-0.
- NL-0 must include the PLAN line (len=15).
- After each testcase, include `PROGRESS: <k>/15 completed`.

### Test Specs (concise)

- NL‑0. Sanity reads — Tail ~120; ±40 around `Update()`. Then snapshot via helper.
- NL‑1. Replace/insert/delete — `HasTarget → return currentTarget != null;`; insert `PrintSeries()` after `GetCurrentTarget` logging "1,2,3"; verify; delete `PrintSeries()`; restore.
- NL‑2. Anchor comment — Insert `// Build marker OK` above `public void Update(...)`; restore.
- NL‑3. End‑of‑class — Insert `// Tail test A/B/C` (3 lines) before the final brace; restore.
- NL‑4. Compile trigger — Record INFO only.

### T‑A. Anchor insert (text path) — Insert helper after `GetCurrentTarget`; verify; delete via `regex_replace`; restore.

### T‑B. Replace body — Single `replace_range` inside `HasTarget`; restore.

- Options: pass `{"validate":"relaxed"}` for interior one-line edits.

### T‑C. Header/region preservation — Edit interior of `ApplyBlend`; preserve signature/docs/regions; restore.

- Options: pass `{"validate":"relaxed"}` for interior one-line edits.

### T‑D. End‑of‑class (anchor) — Insert helper before the final brace; remove; restore.

### T‑E. Lifecycle — Insert → update → delete via regex; restore.

### T‑F. Atomic batch — One `mcp__unity__apply_text_edits` call (text ranges only)

- Compute all three edits from the **same fresh read**:
  1) Two small interior `replace_range` tweaks.
  2) One **end‑of‑class insertion**: find the **index of the final `}`** for the class; create a zero‑width range `[idx, idx)` and set `replacement` to the 3‑line comment block.
- Send all three ranges in **one call**, sorted **descending by start index** to avoid offset drift.
- Expect all‑or‑nothing semantics; on `{status:"overlap"}` or `{status:"bad_request"}`, write the testcase fragment with `<failure>…</failure>`, **restore**, and continue.
- Options: pass `{"applyMode":"atomic"}` to enforce all‑or‑nothing.
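Why descending order avoids offset drift can be illustrated locally (a sketch only; the real batch goes out in one `apply_text_edits` call with server-side atomicity):

```python
# Sketch: apply spans computed from one snapshot in descending start order,
# so right-most edits cannot shift the offsets of edits applied after them.
def apply_batch(source: str, edits):
    # edits: list of (start, end, replacement); (i, i) is a zero-width insert.
    out = source
    for start, end, repl in sorted(edits, key=lambda e: e[0], reverse=True):
        out = out[:start] + repl + out[end:]
    return out
```
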
### T‑G. Path normalization — Make the same edit with `unity://path/Assets/...` then `Assets/...`. Without refreshing `precondition_sha256`, the second attempt returns `{stale_file}`; retry with the server‑provided hash to confirm both forms resolve to the same file.

### T-H. Validation (standard)

- Restore baseline (helper call above).
- Perform a harmless interior tweak (or none), then MUST call:
  `mcp__unity__validate_script(level:"standard")`
- Write the validator output to system-out; VERDICT: PASS if standard is clean, else include `<failure>` with the validator message and continue.

### T-I. Failure surfaces (expected)

- Restore baseline.
- (1) OVERLAP:
  * Fresh read of the file; compute two interior ranges that overlap inside `HasTarget`.
  * Prefer LSP ranges (0‑based) or explicit 1‑based fields; ensure both spans come from the same snapshot.
  * Single `mcp__unity__apply_text_edits` call with both ranges.
  * Expect `{status:"overlap"}` (SDK preflight) → record as PASS; else FAIL. Restore.
- (2) STALE_FILE:
  * Fresh read → `pre_sha`.
  * Make a tiny legit edit with `pre_sha`; success.
  * Attempt another edit reusing the OLD `pre_sha`.
  * Expect `{status:"stale_file"}` → record as PASS; else FAIL. Re-read to refresh, restore.
- (3) USING_GUARD (optional):
  * Attempt a 1-line insert above the first `using`.
  * Expect `{status:"using_guard"}` → record as PASS; else note 'not emitted'. Restore.

### Per‑test error handling and recovery

- For each test (NL‑0..T‑J), use a try/finally pattern:
  - Always write a testcase fragment and perform the restore in finally, even when tools return error payloads.
  - try: run the test steps; always write `reports/<ID>_results.xml` with PASS/FAIL/ERROR
  - finally: run `Bash(scripts/nlt-revert.sh:restore …baseline)` to restore the target file
- On any transport/JSON/tool exception:
  - catch and write a `<testcase>` fragment with an `<error>` node (include the message), then proceed to the next test.
- After NL‑4 completes, proceed directly to T‑A regardless of any earlier validator warnings (do not abort the run).
### T-J. Idempotency

- Restore baseline.
- Repeat a `replace_range` twice (the second call may be a noop). Validate standard after each.
- Insert or ensure a tiny comment, then delete it twice (the second delete may be a noop).
- Restore and PASS unless an error/structural break occurred.
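The noop expectation can be sketched with illustrative helpers (not suite tooling):

```python
# Sketch: "ensure"-style edits are idempotent — a second application of the
# same replace or delete leaves the text unchanged.
def replace_once(source: str, old: str, new: str) -> str:
    return source if old not in source else source.replace(old, new, 1)

def delete_once(source: str, fragment: str) -> str:
    return source.replace(fragment, "", 1)
```
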
### Status & Reporting

- Safeguard statuses are non‑fatal; record and continue.
- End each testcase `<system-out>` with `VERDICT: PASS` or `VERDICT: FAIL`.

@@ -0,0 +1,113 @@
#!/usr/bin/env python3
"""
Post-processes a JUnit XML so that "expected"/environmental failures
(e.g., permission prompts, empty MCP resources, or schema hiccups)
are converted to <skipped/>. Leaves real failures intact.

Usage:
  python .github/scripts/mark_skipped.py reports/claude-nl-tests.xml
"""

from __future__ import annotations

import sys
import os
import re
import xml.etree.ElementTree as ET

PATTERNS = [
    r"\bpermission\b",
    r"\bpermissions\b",
    r"\bautoApprove\b",
    r"\bapproval\b",
    r"\bdenied\b",
    r"requested\s+permissions",
    r"^MCP resources list is empty$",
    r"No MCP resources detected",
    r"aggregator.*returned\s*\[\s*\]",
    r"Unknown resource:\s*unity://",
    r"Input should be a valid dictionary.*ctx",
    r"validation error .* ctx",
]


def should_skip(msg: str) -> bool:
    if not msg:
        return False
    msg_l = msg.strip()
    for pat in PATTERNS:
        if re.search(pat, msg_l, flags=re.IGNORECASE | re.MULTILINE):
            return True
    return False


def summarize_counts(ts: ET.Element):
    tests = 0
    failures = 0
    errors = 0
    skipped = 0
    for case in ts.findall("testcase"):
        tests += 1
        if case.find("failure") is not None:
            failures += 1
        if case.find("error") is not None:
            errors += 1
        if case.find("skipped") is not None:
            skipped += 1
    return tests, failures, errors, skipped


def main(path: str) -> int:
    if not os.path.exists(path):
        print(f"[mark_skipped] No JUnit at {path}; nothing to do.")
        return 0

    try:
        tree = ET.parse(path)
    except ET.ParseError as e:
        print(f"[mark_skipped] Could not parse {path}: {e}")
        return 0

    root = tree.getroot()
    suites = root.findall("testsuite") if root.tag == "testsuites" else [root]

    changed = False
    for ts in suites:
        for case in list(ts.findall("testcase")):
            nodes = [n for n in list(case) if n.tag in ("failure", "error")]
            if not nodes:
                continue
            # If any node matches skip patterns, convert the whole case to skipped.
            first_match_text = None
            to_skip = False
            for n in nodes:
                msg = (n.get("message") or "") + "\n" + (n.text or "")
                if should_skip(msg):
                    first_match_text = (n.text or "").strip() or first_match_text
                    to_skip = True
            if to_skip:
                for n in nodes:
                    case.remove(n)
                reason = "Marked skipped: environment/permission precondition not met"
                skip = ET.SubElement(case, "skipped")
                skip.set("message", reason)
                skip.text = first_match_text or reason
                changed = True
        # Recompute tallies per testsuite
        tests, failures, errors, skipped = summarize_counts(ts)
        ts.set("tests", str(tests))
        ts.set("failures", str(failures))
        ts.set("errors", str(errors))
        ts.set("skipped", str(skipped))

    if changed:
        tree.write(path, encoding="utf-8", xml_declaration=True)
        print(f"[mark_skipped] Updated {path}: converted environmental failures to skipped.")
    else:
        print(f"[mark_skipped] No environmental failures detected in {path}.")

    return 0


if __name__ == "__main__":
    target = (
        sys.argv[1]
        if len(sys.argv) > 1
        else os.environ.get("JUNIT_OUT", "reports/junit-nl-suite.xml")
    )
    raise SystemExit(main(target))
@@ -0,0 +1,356 @@
name: Claude Mini NL Test Suite (Unity live)

on:
  workflow_dispatch: {}

permissions:
  contents: read
  checks: write

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  UNITY_VERSION: 2021.3.45f1
  UNITY_IMAGE: unityci/editor:ubuntu-2021.3.45f1-linux-il2cpp-3
  UNITY_CACHE_ROOT: /home/runner/work/_temp/_github_home

jobs:
  nl-suite:
    if: github.event_name == 'workflow_dispatch'
    runs-on: ubuntu-latest
    timeout-minutes: 60
    env:
      JUNIT_OUT: reports/junit-nl-suite.xml
      MD_OUT: reports/junit-nl-suite.md

    steps:
      # ---------- Detect secrets ----------
      - name: Detect secrets (outputs)
        id: detect
        env:
          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}
          UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
          UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
          UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          set -e
          if [ -n "$ANTHROPIC_API_KEY" ]; then echo "anthropic_ok=true" >> "$GITHUB_OUTPUT"; else echo "anthropic_ok=false" >> "$GITHUB_OUTPUT"; fi
          if [ -n "$UNITY_LICENSE" ] || { [ -n "$UNITY_EMAIL" ] && [ -n "$UNITY_PASSWORD" ]; } || [ -n "$UNITY_SERIAL" ]; then
            echo "unity_ok=true" >> "$GITHUB_OUTPUT"
          else
            echo "unity_ok=false" >> "$GITHUB_OUTPUT"
          fi

      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      # ---------- Python env for MCP server (uv) ----------
      - uses: astral-sh/setup-uv@v4
        with:
          python-version: '3.11'

      - name: Install MCP server
        run: |
          set -eux
          uv venv
          echo "VIRTUAL_ENV=$GITHUB_WORKSPACE/.venv" >> "$GITHUB_ENV"
          echo "$GITHUB_WORKSPACE/.venv/bin" >> "$GITHUB_PATH"
          if [ -f UnityMcpBridge/UnityMcpServer~/src/pyproject.toml ]; then
            uv pip install -e UnityMcpBridge/UnityMcpServer~/src
          elif [ -f UnityMcpBridge/UnityMcpServer~/src/requirements.txt ]; then
            uv pip install -r UnityMcpBridge/UnityMcpServer~/src/requirements.txt
          elif [ -f UnityMcpBridge/UnityMcpServer~/pyproject.toml ]; then
            uv pip install -e UnityMcpBridge/UnityMcpServer~/
          elif [ -f UnityMcpBridge/UnityMcpServer~/requirements.txt ]; then
            uv pip install -r UnityMcpBridge/UnityMcpServer~/requirements.txt
          else
            echo "No MCP Python deps found (skipping)"
          fi

      # ---------- License prime on host (handles ULF or EBL) ----------
      - name: Prime Unity license on host (GameCI)
        if: steps.detect.outputs.unity_ok == 'true'
        uses: game-ci/unity-test-runner@v4
        env:
          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}
          UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
          UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
          UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
        with:
          projectPath: TestProjects/UnityMCPTests
          testMode: EditMode
          customParameters: -runTests -testFilter __NoSuchTest__ -batchmode -nographics
          unityVersion: ${{ env.UNITY_VERSION }}

      # (Optional) Show where the license actually got written
      - name: Inspect GameCI license caches (host)
        if: steps.detect.outputs.unity_ok == 'true'
        run: |
          set -eux
          find "${{ env.UNITY_CACHE_ROOT }}" -maxdepth 4 \( -path "*/.cache" -prune -o -type f \( -name '*.ulf' -o -name 'user.json' \) -print \) 2>/dev/null || true

      # ---------- Clean any stale MCP status from previous runs ----------
      - name: Clean old MCP status
        run: |
          set -eux
          mkdir -p "$HOME/.unity-mcp"
          rm -f "$HOME/.unity-mcp"/unity-mcp-status-*.json || true

      # ---------- Start headless Unity that stays up (bridge enabled) ----------
      - name: Start Unity (persistent bridge)
        if: steps.detect.outputs.unity_ok == 'true'
        env:
          UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
          UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
          UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
        run: |
          set -eu
          if [ ! -d "${{ github.workspace }}/TestProjects/UnityMCPTests/ProjectSettings" ]; then
            echo "Unity project not found; failing fast."
            exit 1
          fi
          mkdir -p "$HOME/.unity-mcp"
          MANUAL_ARG=()
          if [ -f "${UNITY_CACHE_ROOT}/.local/share/unity3d/Unity_lic.ulf" ]; then
            MANUAL_ARG=(-manualLicenseFile /root/.local/share/unity3d/Unity_lic.ulf)
          fi
          EBL_ARGS=()
          [ -n "${UNITY_SERIAL:-}" ] && EBL_ARGS+=(-serial "$UNITY_SERIAL")
          [ -n "${UNITY_EMAIL:-}" ] && EBL_ARGS+=(-username "$UNITY_EMAIL")
          [ -n "${UNITY_PASSWORD:-}" ] && EBL_ARGS+=(-password "$UNITY_PASSWORD")
          docker rm -f unity-mcp >/dev/null 2>&1 || true
          docker run -d --name unity-mcp --network host \
            -e HOME=/root \
            -e UNITY_MCP_ALLOW_BATCH=1 -e UNITY_MCP_STATUS_DIR=/root/.unity-mcp \
            -e UNITY_MCP_BIND_HOST=127.0.0.1 \
            -v "${{ github.workspace }}:/workspace" -w /workspace \
            -v "${{ env.UNITY_CACHE_ROOT }}:/root" \
            -v "$HOME/.unity-mcp:/root/.unity-mcp" \
            ${{ env.UNITY_IMAGE }} /opt/unity/Editor/Unity -batchmode -nographics -logFile - \
            -stackTraceLogType Full \
            -projectPath /workspace/TestProjects/UnityMCPTests \
            "${MANUAL_ARG[@]}" \
            "${EBL_ARGS[@]}" \
            -executeMethod MCPForUnity.Editor.MCPForUnityBridge.StartAutoConnect

      # ---------- Wait for Unity bridge (fail fast if not running/ready) ----------
      - name: Wait for Unity bridge (robust)
        if: steps.detect.outputs.unity_ok == 'true'
        run: |
          set -euo pipefail
          if ! docker ps --format '{{.Names}}' | grep -qx 'unity-mcp'; then
            echo "Unity container failed to start"; docker ps -a || true; exit 1
          fi
          docker logs -f unity-mcp 2>&1 | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' & LOGPID=$!
          deadline=$((SECONDS+420)); READY=0
          try_connect_host() {
            P="$1"
            timeout 1 bash -lc "exec 3<>/dev/tcp/127.0.0.1/$P; head -c 8 <&3 >/dev/null" && return 0 || true
            if command -v nc >/dev/null 2>&1; then nc -6 -z ::1 "$P" && return 0 || true; fi
            return 1
          }

          # in-container probe will try IPv4 then IPv6 via nc or /dev/tcp

          while [ $SECONDS -lt $deadline ]; do
            if docker logs unity-mcp 2>&1 | grep -qE "MCP Bridge listening|Bridge ready|Server started"; then
              READY=1; echo "Bridge ready (log markers)"; break
            fi
            PORT=$(python -c "import os,glob,json,sys,time; b=os.path.expanduser('~/.unity-mcp'); fs=sorted(glob.glob(os.path.join(b,'unity-mcp-status-*.json')), key=os.path.getmtime, reverse=True); print(next((json.load(open(f,'r',encoding='utf-8')).get('unity_port') for f in fs if time.time()-os.path.getmtime(f)<=300 and json.load(open(f,'r',encoding='utf-8')).get('unity_port')), '' ))" 2>/dev/null || true)
            if [ -n "${PORT:-}" ] && { try_connect_host "$PORT" || docker exec unity-mcp bash -lc "timeout 1 bash -lc 'exec 3<>/dev/tcp/127.0.0.1/$PORT' || (command -v nc >/dev/null 2>&1 && nc -6 -z ::1 $PORT)"; }; then
              READY=1; echo "Bridge ready on port $PORT"; break
            fi
            if docker logs unity-mcp 2>&1 | grep -qE "No valid Unity Editor license|Token not found in cache|com\.unity\.editor\.headless"; then
              echo "Licensing error detected"; break
            fi
            sleep 2
          done

          kill $LOGPID || true

          if [ "$READY" != "1" ]; then
            echo "Bridge not ready; diagnostics:"
            echo "== status files =="; ls -la "$HOME/.unity-mcp" || true
            echo "== status contents =="; for f in "$HOME"/.unity-mcp/unity-mcp-status-*.json; do [ -f "$f" ] && { echo "--- $f"; sed -n '1,120p' "$f"; }; done
            echo "== sockets (inside container) =="; docker exec unity-mcp bash -lc 'ss -lntp || netstat -tulpen || true'
            echo "== tail of Unity log =="
            docker logs --tail 200 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true
            exit 1
          fi

      # ---------- Make MCP config available to the action ----------
      - name: Write MCP config (.claude/mcp.json)
        run: |
          set -eux
          mkdir -p .claude
          cat > .claude/mcp.json <<JSON
          {
            "mcpServers": {
              "unity": {
                "command": "uv",
                "args": ["run","--active","--directory","UnityMcpBridge/UnityMcpServer~/src","python","server.py"],
                "transport": { "type": "stdio" },
                "env": {
                  "PYTHONUNBUFFERED": "1",
                  "MCP_LOG_LEVEL": "debug",
                  "UNITY_PROJECT_ROOT": "$GITHUB_WORKSPACE/TestProjects/UnityMCPTests"
                }
              }
            }
          }
          JSON

      # ---------- Ensure reports dir exists ----------
      - name: Prepare reports
        run: |
          set -eux
          mkdir -p reports

      # ---------- Run full NL suite once ----------
      - name: Run Claude NL suite (single pass)
        uses: anthropics/claude-code-base-action@beta
        if: steps.detect.outputs.anthropic_ok == 'true' && steps.detect.outputs.unity_ok == 'true'
        env:
          JUNIT_OUT: reports/junit-nl-suite.xml
          MD_OUT: reports/junit-nl-suite.md
        with:
          use_node_cache: false
          prompt_file: .claude/prompts/nl-unity-claude-tests-mini.md
          mcp_config: .claude/mcp.json
          allowed_tools: "Write,mcp__unity__manage_editor,mcp__unity__list_resources,mcp__unity__read_resource,mcp__unity__apply_text_edits,mcp__unity__script_apply_edits,mcp__unity__validate_script,mcp__unity__find_in_file,mcp__unity__read_console"
          disallowed_tools: "TodoWrite,Task"
          model: "claude-3-7-sonnet-latest"
          timeout_minutes: "30"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}

      - name: Normalize JUnit for consumer actions (strong)
        if: always()
        shell: bash
        run: |
          python3 - <<'PY'
          from pathlib import Path
          import xml.etree.ElementTree as ET
          import os

          def localname(tag: str) -> str:
              return tag.rsplit('}', 1)[-1] if '}' in tag else tag

          src = Path(os.environ.get('JUNIT_OUT', 'reports/junit-nl-suite.xml'))
          out = Path('reports/junit-for-actions.xml')
          out.parent.mkdir(parents=True, exist_ok=True)

          if not src.exists():
              # Try to use any existing XML as a source (e.g., claude-nl-tests.xml)
              candidates = sorted(Path('reports').glob('*.xml'))
              if candidates:
                  src = candidates[0]
              else:
                  print("WARN: no XML source found for normalization")

          if src.exists():
              try:
                  root = ET.parse(src).getroot()
                  rtag = localname(root.tag)
                  if rtag == 'testsuites' and len(root) == 1 and localname(root[0].tag) == 'testsuite':
                      ET.ElementTree(root[0]).write(out, encoding='utf-8', xml_declaration=True)
                  else:
                      out.write_bytes(src.read_bytes())
              except Exception as e:
                  print("Normalization error:", e)
                  out.write_bytes(src.read_bytes())

          # Always create a second copy with a junit-* name so wildcard patterns match too
          if out.exists():
              Path('reports/junit-nl-suite-copy.xml').write_bytes(out.read_bytes())
          PY

      - name: "Debug: list report files"
        if: always()
        shell: bash
        run: |
          set -eux
          ls -la reports || true
          shopt -s nullglob
          for f in reports/*.xml; do
            echo "===== $f ====="
            head -n 40 "$f" || true
          done

      # sanitize only the markdown (does not touch JUnit xml)
      - name: Sanitize markdown (all shards)
        if: always()
        run: |
          set -eu
          python - <<'PY'
          from pathlib import Path
          rp=Path('reports')
          rp.mkdir(parents=True, exist_ok=True)
          for p in rp.glob('*.md'):
              b=p.read_bytes().replace(b'\x00', b'')
              s=b.decode('utf-8','replace').replace('\r\n','\n')
              p.write_text(s, encoding='utf-8', newline='\n')
          PY

      - name: NL/T details → Job Summary
        if: always()
        run: |
          echo "## Unity NL/T Editing Suite — Full Coverage" >> $GITHUB_STEP_SUMMARY
          python - <<'PY' >> $GITHUB_STEP_SUMMARY
          from pathlib import Path
          p = Path('reports/junit-nl-suite.md') if Path('reports/junit-nl-suite.md').exists() else Path('reports/claude-nl-tests.md')
          if p.exists():
              text = p.read_bytes().decode('utf-8', 'replace')
              MAX = 65000
              print(text[:MAX])
              if len(text) > MAX:
                  print("\n\n_…truncated in summary; full report is in artifacts._")
          else:
              print("_No markdown report found._")
          PY

      - name: Fallback JUnit if missing
        if: always()
        run: |
          set -eu
          mkdir -p reports
          if [ ! -f reports/junit-for-actions.xml ]; then
            printf '%s\n' \
              '<?xml version="1.0" encoding="UTF-8"?>' \
              '<testsuite name="UnityMCP.NL-T" tests="1" failures="1" time="0">' \
              '  <testcase classname="UnityMCP.NL-T" name="NL-Suite.Execution" time="0.0">' \
              '    <failure><![CDATA[No JUnit was produced by the NL suite step. See the '"'"'Run Claude NL suite (single pass)'"'"' logs.]]></failure>' \
              '  </testcase>' \
              '</testsuite>' \
              > reports/junit-for-actions.xml
          fi

      - name: Publish JUnit reports
        if: always()
        uses: mikepenz/action-junit-report@v5
        with:
          report_paths: 'reports/junit-for-actions.xml'
          include_passed: true
          detailed_summary: true
          annotate_notice: true
          require_tests: false
          fail_on_parse_error: true

      - name: Upload artifacts
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: claude-nl-suite-artifacts
          path: reports/**

      # ---------- Always stop Unity ----------
      - name: Stop Unity
        if: always()
        run: |
          docker logs --tail 400 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true
          docker rm -f unity-mcp || true
@@ -0,0 +1,543 @@
name: Claude NL/T Full Suite (Unity live)

on:
  workflow_dispatch: {}

permissions:
  contents: read
  checks: write

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  UNITY_VERSION: 2021.3.45f1
  UNITY_IMAGE: unityci/editor:ubuntu-2021.3.45f1-linux-il2cpp-3
  UNITY_CACHE_ROOT: /home/runner/work/_temp/_github_home

jobs:
  nl-suite:
    if: github.event_name == 'workflow_dispatch'
    runs-on: ubuntu-latest
    timeout-minutes: 60
    env:
      JUNIT_OUT: reports/junit-nl-suite.xml
      MD_OUT: reports/junit-nl-suite.md

    steps:
      # ---------- Secrets check ----------
      - name: Detect secrets (outputs)
        id: detect
        env:
          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}
          UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
          UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
          UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          set -e
          if [ -n "$ANTHROPIC_API_KEY" ]; then echo "anthropic_ok=true" >> "$GITHUB_OUTPUT"; else echo "anthropic_ok=false" >> "$GITHUB_OUTPUT"; fi
          if [ -n "$UNITY_LICENSE" ] || { [ -n "$UNITY_EMAIL" ] && [ -n "$UNITY_PASSWORD" ]; } || [ -n "$UNITY_SERIAL" ]; then
            echo "unity_ok=true" >> "$GITHUB_OUTPUT"
          else
            echo "unity_ok=false" >> "$GITHUB_OUTPUT"
          fi

      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      # ---------- Python env for MCP server (uv) ----------
      - uses: astral-sh/setup-uv@v4
        with:
          python-version: '3.11'

      - name: Install MCP server
        run: |
          set -eux
          uv venv
          echo "VIRTUAL_ENV=$GITHUB_WORKSPACE/.venv" >> "$GITHUB_ENV"
          echo "$GITHUB_WORKSPACE/.venv/bin" >> "$GITHUB_PATH"
          if [ -f UnityMcpBridge/UnityMcpServer~/src/pyproject.toml ]; then
            uv pip install -e UnityMcpBridge/UnityMcpServer~/src
          elif [ -f UnityMcpBridge/UnityMcpServer~/src/requirements.txt ]; then
            uv pip install -r UnityMcpBridge/UnityMcpServer~/src/requirements.txt
          elif [ -f UnityMcpBridge/UnityMcpServer~/pyproject.toml ]; then
            uv pip install -e UnityMcpBridge/UnityMcpServer~/
          elif [ -f UnityMcpBridge/UnityMcpServer~/requirements.txt ]; then
            uv pip install -r UnityMcpBridge/UnityMcpServer~/requirements.txt
          else
            echo "No MCP Python deps found (skipping)"
          fi
      # ---------- License prime on host (GameCI) ----------
      - name: Prime Unity license on host (GameCI)
        if: steps.detect.outputs.unity_ok == 'true'
        uses: game-ci/unity-test-runner@v4
        env:
          UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }}
          UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
          UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
          UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
        with:
          projectPath: TestProjects/UnityMCPTests
          testMode: EditMode
          customParameters: -runTests -testFilter __NoSuchTest__ -batchmode -nographics
          unityVersion: ${{ env.UNITY_VERSION }}

      # (Optional) Inspect license caches
      - name: Inspect GameCI license caches (host)
        if: steps.detect.outputs.unity_ok == 'true'
        run: |
          set -eux
          find "${{ env.UNITY_CACHE_ROOT }}" -maxdepth 4 \( -path "*/.cache" -prune -o -type f \( -name '*.ulf' -o -name 'user.json' \) -print \) 2>/dev/null || true

      # ---------- Clean old MCP status ----------
      - name: Clean old MCP status
        run: |
          set -eux
          mkdir -p "$HOME/.unity-mcp"
          rm -f "$HOME/.unity-mcp"/unity-mcp-status-*.json || true
      # ---------- Start headless Unity (persistent bridge) ----------
      - name: Start Unity (persistent bridge)
        if: steps.detect.outputs.unity_ok == 'true'
        env:
          UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
          UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
          UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
        run: |
          set -eu
          if [ ! -d "${{ github.workspace }}/TestProjects/UnityMCPTests/ProjectSettings" ]; then
            echo "Unity project not found; failing fast."
            exit 1
          fi
          mkdir -p "$HOME/.unity-mcp"
          MANUAL_ARG=()
          if [ -f "${UNITY_CACHE_ROOT}/.local/share/unity3d/Unity_lic.ulf" ]; then
            MANUAL_ARG=(-manualLicenseFile /root/.local/share/unity3d/Unity_lic.ulf)
          fi
          EBL_ARGS=()
          [ -n "${UNITY_SERIAL:-}" ] && EBL_ARGS+=(-serial "$UNITY_SERIAL")
          [ -n "${UNITY_EMAIL:-}" ] && EBL_ARGS+=(-username "$UNITY_EMAIL")
          [ -n "${UNITY_PASSWORD:-}" ] && EBL_ARGS+=(-password "$UNITY_PASSWORD")
          docker rm -f unity-mcp >/dev/null 2>&1 || true
          docker run -d --name unity-mcp --network host \
            -e HOME=/root \
            -e UNITY_MCP_ALLOW_BATCH=1 -e UNITY_MCP_STATUS_DIR=/root/.unity-mcp \
            -e UNITY_MCP_BIND_HOST=127.0.0.1 \
            -v "${{ github.workspace }}:/workspace" -w /workspace \
            -v "${{ env.UNITY_CACHE_ROOT }}:/root" \
            -v "$HOME/.unity-mcp:/root/.unity-mcp" \
            ${{ env.UNITY_IMAGE }} /opt/unity/Editor/Unity -batchmode -nographics -logFile - \
            -stackTraceLogType Full \
            -projectPath /workspace/TestProjects/UnityMCPTests \
            "${MANUAL_ARG[@]}" \
            "${EBL_ARGS[@]}" \
            -executeMethod MCPForUnity.Editor.MCPForUnityBridge.StartAutoConnect
      # ---------- Wait for Unity bridge ----------
      - name: Wait for Unity bridge (robust)
        if: steps.detect.outputs.unity_ok == 'true'
        run: |
          set -euo pipefail
          if ! docker ps --format '{{.Names}}' | grep -qx 'unity-mcp'; then
            echo "Unity container failed to start"; docker ps -a || true; exit 1
          fi
          docker logs -f unity-mcp 2>&1 | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' & LOGPID=$!
          deadline=$((SECONDS+420)); READY=0
          try_connect_host() {
            P="$1"
            timeout 1 bash -lc "exec 3<>/dev/tcp/127.0.0.1/$P; head -c 8 <&3 >/dev/null" && return 0 || true
            if command -v nc >/dev/null 2>&1; then nc -6 -z ::1 "$P" && return 0 || true; fi
            return 1
          }
          while [ $SECONDS -lt $deadline ]; do
            if docker logs unity-mcp 2>&1 | grep -qE "MCP Bridge listening|Bridge ready|Server started"; then
              READY=1; echo "Bridge ready (log markers)"; break
            fi
            PORT=$(python3 -c "import os,glob,json,sys,time; b=os.path.expanduser('~/.unity-mcp'); fs=sorted(glob.glob(os.path.join(b,'unity-mcp-status-*.json')), key=os.path.getmtime, reverse=True); print(next((json.load(open(f,'r',encoding='utf-8')).get('unity_port') for f in fs if time.time()-os.path.getmtime(f)<=300 and json.load(open(f,'r',encoding='utf-8')).get('unity_port')), ''))" 2>/dev/null || true)
            if [ -n "${PORT:-}" ] && { try_connect_host "$PORT" || docker exec unity-mcp bash -lc "timeout 1 bash -lc 'exec 3<>/dev/tcp/127.0.0.1/$PORT' || (command -v nc >/dev/null 2>&1 && nc -6 -z ::1 $PORT)"; }; then
              READY=1; echo "Bridge ready on port $PORT"; break
            fi
            if docker logs unity-mcp 2>&1 | grep -qE "No valid Unity Editor license|Token not found in cache|com\.unity\.editor\.headless"; then
              echo "Licensing error detected"; break
            fi
            sleep 2
          done
          kill $LOGPID || true
          if [ "$READY" != "1" ]; then
            echo "Bridge not ready; diagnostics:"
            echo "== status files =="; ls -la "$HOME/.unity-mcp" || true
            echo "== status contents =="; for f in "$HOME"/.unity-mcp/unity-mcp-status-*.json; do [ -f "$f" ] && { echo "--- $f"; sed -n '1,120p' "$f"; }; done
            echo "== sockets (inside container) =="; docker exec unity-mcp bash -lc 'ss -lntp || netstat -tulpen || true'
            echo "== tail of Unity log =="
            docker logs --tail 200 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true
            exit 1
          fi
      # ---------- MCP client config ----------
      - name: Write MCP config (.claude/mcp.json)
        run: |
          set -eux
          mkdir -p .claude
          cat > .claude/mcp.json <<JSON
          {
            "mcpServers": {
              "unity": {
                "command": "uv",
                "args": ["run","--active","--directory","UnityMcpBridge/UnityMcpServer~/src","python","server.py"],
                "transport": { "type": "stdio" },
                "env": {
                  "PYTHONUNBUFFERED": "1",
                  "MCP_LOG_LEVEL": "debug",
                  "UNITY_PROJECT_ROOT": "$GITHUB_WORKSPACE/TestProjects/UnityMCPTests"
                }
              }
            }
          }
          JSON
      # ---------- Reports & helper ----------
      - name: Prepare reports and dirs
        run: |
          set -eux
          rm -f reports/*.xml reports/*.md || true
          mkdir -p reports reports/_snapshots scripts

      - name: Create report skeletons
        run: |
          set -eu
          cat > "$JUNIT_OUT" <<'XML'
          <?xml version="1.0" encoding="UTF-8"?>
          <testsuites><testsuite name="UnityMCP.NL-T" tests="1" failures="1" errors="0" skipped="0" time="0">
            <testcase name="NL-Suite.Bootstrap" classname="UnityMCP.NL-T">
              <failure message="bootstrap">Bootstrap placeholder; suite will append real tests.</failure>
            </testcase>
          </testsuite></testsuites>
          XML
          printf '# Unity NL/T Editing Suite Test Results\n\n' > "$MD_OUT"

      - name: Write safe revert helper (scripts/nlt-revert.sh)
        shell: bash
        run: |
          set -eux
          cat > scripts/nlt-revert.sh <<'BASH'
          #!/usr/bin/env bash
          set -euo pipefail
          sub="${1:-}"; target_rel="${2:-}"; snap="${3:-}"
          WS="${GITHUB_WORKSPACE:-$PWD}"
          ROOT="$WS/TestProjects/UnityMCPTests"
          t_abs="$(realpath -m "$WS/$target_rel")"
          s_abs="$(realpath -m "$WS/$snap")"
          if [[ "$t_abs" != "$ROOT/Assets/"* ]]; then
            echo "refuse: target outside allowed scope: $t_abs" >&2; exit 2
          fi
          mkdir -p "$(dirname "$s_abs")"
          case "$sub" in
            snapshot)
              cp -f "$t_abs" "$s_abs"
              sha=$(sha256sum "$s_abs" | awk '{print $1}')
              echo "snapshot_sha=$sha"
              ;;
            restore)
              if [[ ! -f "$s_abs" ]]; then echo "snapshot missing: $s_abs" >&2; exit 3; fi
              cp -f "$s_abs" "$t_abs"
              touch "$t_abs"
              sha=$(sha256sum "$t_abs" | awk '{print $1}')
              echo "restored_sha=$sha"
              ;;
            *)
              echo "usage: $0 snapshot|restore <target_rel_path> <snapshot_path>" >&2; exit 1
              ;;
          esac
          BASH
          chmod +x scripts/nlt-revert.sh
      # ---------- Snapshot baseline (pre-agent) ----------
      - name: Snapshot baseline (pre-agent)
        if: steps.detect.outputs.anthropic_ok == 'true' && steps.detect.outputs.unity_ok == 'true'
        shell: bash
        run: |
          set -euo pipefail
          TARGET="TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs"
          SNAP="reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline"
          scripts/nlt-revert.sh snapshot "$TARGET" "$SNAP"

      # ---------- Run suite ----------
      - name: Run Claude NL suite (single pass)
        uses: anthropics/claude-code-base-action@beta
        if: steps.detect.outputs.anthropic_ok == 'true' && steps.detect.outputs.unity_ok == 'true'
        continue-on-error: true
        with:
          use_node_cache: false
          prompt_file: .claude/prompts/nl-unity-suite-full.md
          mcp_config: .claude/mcp.json
          allowed_tools: >-
            Write,
            Bash(scripts/nlt-revert.sh:*),
            mcp__unity__manage_editor,
            mcp__unity__list_resources,
            mcp__unity__read_resource,
            mcp__unity__apply_text_edits,
            mcp__unity__script_apply_edits,
            mcp__unity__validate_script,
            mcp__unity__find_in_file,
            mcp__unity__read_console,
            mcp__unity__get_sha
          disallowed_tools: TodoWrite,Task
          model: claude-3-7-sonnet-latest
          timeout_minutes: "30"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
      # ---------- Merge testcase fragments into JUnit ----------
      - name: Normalize/assemble JUnit in-place (single file)
        if: always()
        shell: bash
        run: |
          python3 - <<'PY'
          from pathlib import Path
          import xml.etree.ElementTree as ET
          import re, os

          def localname(tag: str) -> str:
              return tag.rsplit('}', 1)[-1] if '}' in tag else tag

          src = Path(os.environ.get('JUNIT_OUT', 'reports/junit-nl-suite.xml'))
          if not src.exists():
              raise SystemExit(0)
          tree = ET.parse(src); root = tree.getroot()
          suite = root.find('./*') if localname(root.tag) == 'testsuites' else root
          if suite is None:
              raise SystemExit(0)
          fragments = sorted(Path('reports').glob('*_results.xml'))
          added = 0
          for frag in fragments:
              try:
                  froot = ET.parse(frag).getroot()
                  if localname(froot.tag) == 'testcase':
                      suite.append(froot); added += 1
                  else:
                      for tc in froot.findall('.//testcase'):
                          suite.append(tc); added += 1
              except Exception:
                  # Malformed fragment: salvage any well-formed <testcase> blocks by regex
                  txt = Path(frag).read_text(encoding='utf-8', errors='replace')
                  for m in re.findall(r'<testcase[\s\S]*?</testcase>', txt, flags=re.DOTALL):
                      try:
                          suite.append(ET.fromstring(m)); added += 1
                      except Exception:
                          pass
          if added:
              # Drop bootstrap placeholder and recompute counts
              removed_bootstrap = 0
              for tc in list(suite.findall('.//testcase')):
                  name = (tc.get('name') or '')
                  if name == 'NL-Suite.Bootstrap':
                      suite.remove(tc)
                      removed_bootstrap += 1
              testcases = suite.findall('.//testcase')
              tests_cnt = len(testcases)
              failures_cnt = sum(1 for tc in testcases if (tc.find('failure') is not None or tc.find('error') is not None))
              suite.set('tests', str(tests_cnt))
              suite.set('failures', str(failures_cnt))
              suite.set('errors', str(0))
              suite.set('skipped', str(0))
              tree.write(src, encoding='utf-8', xml_declaration=True)
              print(f"Added {added} testcase fragments; removed bootstrap={removed_bootstrap}; tests={tests_cnt}; failures={failures_cnt}")
          PY
      # ---------- Markdown summary from JUnit ----------
      - name: Build markdown summary from JUnit
        if: always()
        shell: bash
        run: |
          python3 - <<'PY'
          import xml.etree.ElementTree as ET
          from pathlib import Path
          import os, html

          def localname(tag: str) -> str:
              return tag.rsplit('}', 1)[-1] if '}' in tag else tag

          src = Path(os.environ.get('JUNIT_OUT', 'reports/junit-nl-suite.xml'))
          md_out = Path(os.environ.get('MD_OUT', 'reports/junit-nl-suite.md'))
          # Ensure destination directory exists even if earlier prep steps were skipped
          md_out.parent.mkdir(parents=True, exist_ok=True)

          if not src.exists():
              md_out.write_text("# Unity NL/T Editing Suite Test Results\n\n(No JUnit found)\n", encoding='utf-8')
              raise SystemExit(0)

          tree = ET.parse(src)
          root = tree.getroot()
          suite = root.find('./*') if localname(root.tag) == 'testsuites' else root
          cases = [] if suite is None else list(suite.findall('.//testcase'))

          total = len(cases)
          failures = sum(1 for tc in cases if (tc.find('failure') is not None or tc.find('error') is not None))
          passed = total - failures

          desired = ['NL-0','NL-1','NL-2','NL-3','NL-4','T-A','T-B','T-C','T-D','T-E','T-F','T-G','T-H','T-I','T-J']
          name_to_case = {(tc.get('name') or ''): tc for tc in cases}

          def status_for(prefix: str):
              for name, tc in name_to_case.items():
                  if name.startswith(prefix):
                      return not ((tc.find('failure') is not None) or (tc.find('error') is not None))
              return None

          lines = []
          lines += [
              '# Unity NL/T Editing Suite Test Results',
              '',
              f'Totals: {passed} passed, {failures} failed, {total} total',
              '',
              '## Test Checklist'
          ]
          for p in desired:
              st = status_for(p)
              lines.append(f"- [x] {p}" if st is True else (f"- [ ] {p} (fail)" if st is False else f"- [ ] {p} (not run)"))
          lines.append('')

          # Rich per-test system-out details
          lines.append('## Test Details')

          def order_key(n: str):
              try:
                  if n.startswith('NL-') and n[3].isdigit():
                      return (0, int(n.split('.')[0].split('-')[1]))
              except Exception:
                  pass
              if n.startswith('T-') and len(n) > 2 and n[2].isalpha():
                  return (1, ord(n[2]))
              return (2, n)

          MAX_CHARS = 2000
          for name in sorted(name_to_case.keys(), key=order_key):
              tc = name_to_case[name]
              status_badge = "PASS" if (tc.find('failure') is None and tc.find('error') is None) else "FAIL"
              lines.append(f"### {name} — {status_badge}")
              so = tc.find('system-out')
              text = '' if so is None or so.text is None else so.text.replace('\r\n', '\n')
              # Unescape XML entities so code reads naturally (e.g. '=>' instead of '=&gt;')
              if text:
                  text = html.unescape(text)
              if text.strip():
                  t = text.strip()
                  if len(t) > MAX_CHARS:
                      t = t[:MAX_CHARS] + "\n…(truncated)"
                  # Use a safer fence if content contains triple backticks
                  fence = '```'
                  if '```' in t:
                      fence = '````'
                  lines.append(fence)
                  lines.append(t)
                  lines.append(fence)
              else:
                  lines.append('(no system-out)')
              # Element.find() results are falsy when the element has no children,
              # so compare against None instead of chaining with `or`
              node = tc.find('failure')
              if node is None:
                  node = tc.find('error')
              if node is not None:
                  msg = (node.get('message') or '').strip()
                  body = (node.text or '').strip()
                  if msg: lines.append(f"- Message: {msg}")
                  if body: lines.append(f"- Detail: {body.splitlines()[0][:500]}")
              lines.append('')

          md_out.write_text('\n'.join(lines), encoding='utf-8')
          PY
      - name: "Debug: list report files"
        if: always()
        shell: bash
        run: |
          set -eux
          ls -la reports || true
          shopt -s nullglob
          for f in reports/*.xml; do
            echo "===== $f ====="
            head -n 40 "$f" || true
          done

      # ---------- Collect execution transcript (if present) ----------
      - name: Collect action execution transcript
        if: always()
        shell: bash
        run: |
          set -eux
          if [ -f "$RUNNER_TEMP/claude-execution-output.json" ]; then
            cp "$RUNNER_TEMP/claude-execution-output.json" reports/claude-execution-output.json
          elif [ -f "/home/runner/work/_temp/claude-execution-output.json" ]; then
            cp "/home/runner/work/_temp/claude-execution-output.json" reports/claude-execution-output.json
          fi

      - name: Sanitize markdown (normalize newlines)
        if: always()
        run: |
          set -eu
          python3 - <<'PY'
          from pathlib import Path
          rp = Path('reports'); rp.mkdir(parents=True, exist_ok=True)
          for p in rp.glob('*.md'):
              b = p.read_bytes().replace(b'\x00', b'')
              s = b.decode('utf-8', 'replace').replace('\r\n', '\n')
              p.write_text(s, encoding='utf-8', newline='\n')
          PY

      - name: NL/T details → Job Summary
        if: always()
        run: |
          echo "## Unity NL/T Editing Suite — Summary" >> $GITHUB_STEP_SUMMARY
          python3 - <<'PY' >> $GITHUB_STEP_SUMMARY
          from pathlib import Path
          p = Path('reports/junit-nl-suite.md')
          if p.exists():
              text = p.read_bytes().decode('utf-8', 'replace')
              MAX = 65000
              print(text[:MAX])
              if len(text) > MAX:
                  print("\n\n_…truncated; full report in artifacts._")
          else:
              print("_No markdown report found._")
          PY

      - name: Fallback JUnit if missing
        if: always()
        run: |
          set -eu
          mkdir -p reports
          if [ ! -f "$JUNIT_OUT" ]; then
            printf '%s\n' \
              '<?xml version="1.0" encoding="UTF-8"?>' \
              '<testsuite name="UnityMCP.NL-T" tests="1" failures="1" time="0">' \
              '  <testcase classname="UnityMCP.NL-T" name="NL-Suite.Execution" time="0.0">' \
              '    <failure><![CDATA[No JUnit was produced by the NL suite step. See the step logs.]]></failure>' \
              '  </testcase>' \
              '</testsuite>' \
              > "$JUNIT_OUT"
          fi

      - name: Publish JUnit report
        if: always()
        uses: mikepenz/action-junit-report@v5
        with:
          report_paths: '${{ env.JUNIT_OUT }}'
          include_passed: true
          detailed_summary: true
          annotate_notice: true
          require_tests: false
          fail_on_parse_error: true

      - name: Upload artifacts (reports + fragments + transcript)
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: claude-nl-suite-artifacts
          path: |
            ${{ env.JUNIT_OUT }}
            ${{ env.MD_OUT }}
            reports/*_results.xml
            reports/claude-execution-output.json
          retention-days: 7

      # ---------- Always stop Unity ----------
      - name: Stop Unity
        if: always()
        run: |
          docker logs --tail 400 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true
          docker rm -f unity-mcp || true
@@ -34,3 +34,5 @@ CONTRIBUTING.md.meta
.vscode/
.aider*
.DS_Store*
# Unity test project lock files
TestProjects/UnityMCPTests/Packages/packages-lock.json
@@ -66,6 +66,41 @@ To find it reliably:

Note: In recent builds, the Python server sources are also bundled inside the package under `UnityMcpServer~/src`. This is handy for local testing or for pointing MCP clients directly at the packaged server.

## CI Test Workflow (GitHub Actions)

We provide a CI job that runs a Natural Language Editing mini-suite against the Unity test project. It spins up a headless Unity container and connects to it via the MCP bridge.

- Trigger: workflow dispatch (`Claude NL suite (Unity live)`).
- Image: `UNITY_IMAGE` (UnityCI), pulled by tag; the job resolves a digest at runtime. Logs are sanitized.
- Reports: JUnit at `reports/junit-nl-suite.xml`, Markdown at `reports/junit-nl-suite.md`.
- Publishing: the JUnit file is normalized to `reports/junit-for-actions.xml` and published; artifacts include all files under `reports/`.

### Test target script
- The repo includes a long, standalone C# script used to exercise larger edits and read windows:
  - `TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs`

Use this file locally and in CI to validate multi-edit batches, anchor inserts, and windowed reads on a sizable script.
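Edits against this target are guarded so stale writes are rejected; a minimal local sketch of that precondition-hash idea (the helper names here are illustrative, not the server's API):

```python
import hashlib
from pathlib import Path


def file_sha256(path: str) -> str:
    """Hash the exact bytes on disk; any drift invalidates pending edits."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def guarded_write(path: str, expected_sha: str, new_text: str) -> bool:
    """Apply an edit only if the file still matches the hash read earlier."""
    if file_sha256(path) != expected_sha:
        return False  # precondition failed: the file changed underneath us
    Path(path).write_text(new_text, encoding="utf-8")
    return True
```

The same check is what makes a multi-edit batch safe to retry: re-read, re-hash, re-apply.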
### Add a new NL test
- Edit `.claude/prompts/nl-unity-claude-tests-mini.md` (or `nl-unity-suite-full.md` for the larger suite).
- Follow the conventions: single `<testsuite>` root, one `<testcase>` per sub-test, and end each system-out with `VERDICT: PASS|FAIL`.
- Keep edits minimal and reversible; include evidence windows and compact diffs.
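The fragment shape those conventions describe can be sketched with the standard library (the attribute values below mirror the suite's conventions; your exact classnames may differ):

```python
import xml.etree.ElementTree as ET


def make_testcase(name: str, passed: bool, evidence: str) -> ET.Element:
    """Build one <testcase> fragment in the NL/T suite's expected shape."""
    tc = ET.Element("testcase", {"classname": "UnityMCP.NL-T", "name": name, "time": "0.0"})
    out = ET.SubElement(tc, "system-out")
    # system-out must end with the machine-checkable verdict line
    out.text = evidence.rstrip() + ("\nVERDICT: PASS" if passed else "\nVERDICT: FAIL")
    if not passed:
        ET.SubElement(tc, "failure", {"message": name}).text = "See system-out for evidence."
    return tc


tc = make_testcase("NL-1.AnchorInsert", True, "inserted 1 line at anchor; diff below")
print(ET.tostring(tc, encoding="unicode"))
```

Fragments written this way merge cleanly in the "Normalize/assemble JUnit" step, which appends each `<testcase>` into the single suite file.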
### Run the suite
1) Push your branch, then run the workflow manually from the Actions tab.
2) The job writes reports into `reports/` and uploads them as artifacts.
3) The “JUnit Test Report” check summarizes results; open the Job Summary for the full markdown.
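After downloading the artifacts, a quick local tally of the JUnit file (the path is the one the job produces) takes a few lines of standard-library Python:

```python
import xml.etree.ElementTree as ET


def tally(junit_path: str) -> tuple[int, int]:
    """Return (total, failed) testcase counts from a JUnit XML report."""
    root = ET.parse(junit_path).getroot()
    cases = root.findall(".//testcase")
    # A case counts as failed if it carries a <failure> or <error> child
    failed = sum(1 for tc in cases
                 if tc.find("failure") is not None or tc.find("error") is not None)
    return len(cases), failed
```

For example, `tally("reports/junit-nl-suite.xml")` on a downloaded artifact reproduces the totals line shown in the Job Summary.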
### View results
- Job Summary: an inline markdown summary of the run, shown on the workflow run page in the Actions tab.
- Check: “JUnit Test Report” on the PR/commit.
- Artifacts: `claude-nl-suite-artifacts` includes the XML and MD reports.

### MCP Connection Debugging
- **Enable debug logs** in the Unity MCP window (inside the Editor) to view connection status, auto-setup results, and MCP client paths. It shows:
  - bridge startup/port, client connections, strict framing negotiation, and parsed frames
  - auto-config path detection (Windows/macOS/Linux), uv/claude resolution, and surfaced errors
- In CI, the job tails Unity logs (redacted for serial/license/password/token) and prints socket/status JSON diagnostics if startup fails.

## Workflow

1. **Make changes** to your source code in this directory
@@ -43,6 +43,9 @@ MCP for Unity acts as a bridge, allowing AI assistants (like Claude, Cursor) to
* `manage_shader`: Performs shader CRUD operations (create, read, modify, delete).
* `manage_gameobject`: Manages GameObjects: create, modify, delete, find, and component operations.
* `execute_menu_item`: Executes a menu item via its path (e.g., "File/Save Project").
* `apply_text_edits`: Precise text edits with precondition hashes and atomic multi-edit batches.
* `script_apply_edits`: Structured C# method/class edits (insert/replace/delete) with safer boundaries.
* `validate_script`: Fast validation (basic/standard) to catch syntax/structure issues before/after writes.
</details>

---
(file diff suppressed because it is too large)
@@ -0,0 +1,2 @@
fileFormatVersion: 2
guid: dfbabf507ab1245178d1a8e745d8d283
@@ -1,417 +0,0 @@
{
  "dependencies": {
    "com.coplaydev.unity-mcp": {
      "version": "file:../../../UnityMcpBridge",
      "depth": 0,
      "source": "local",
      "dependencies": {
        "com.unity.nuget.newtonsoft-json": "3.0.2"
      }
    },
    "com.unity.collab-proxy": {
      "version": "2.5.2",
      "depth": 0,
      "source": "registry",
      "dependencies": {},
      "url": "https://packages.unity.com"
    },
    "com.unity.editorcoroutines": {
      "version": "1.0.0",
      "depth": 1,
      "source": "registry",
      "dependencies": {},
      "url": "https://packages.unity.com"
    },
    "com.unity.ext.nunit": {
      "version": "1.0.6",
      "depth": 1,
      "source": "registry",
      "dependencies": {},
      "url": "https://packages.unity.com"
    },
    "com.unity.feature.development": {
      "version": "1.0.1",
      "depth": 0,
      "source": "builtin",
      "dependencies": {
        "com.unity.ide.visualstudio": "2.0.22",
        "com.unity.ide.rider": "3.0.31",
        "com.unity.ide.vscode": "1.2.5",
        "com.unity.editorcoroutines": "1.0.0",
        "com.unity.performance.profile-analyzer": "1.2.2",
        "com.unity.test-framework": "1.1.33",
        "com.unity.testtools.codecoverage": "1.2.6"
      }
    },
    "com.unity.ide.rider": {
      "version": "3.0.31",
      "depth": 0,
      "source": "registry",
      "dependencies": {
        "com.unity.ext.nunit": "1.0.6"
      },
      "url": "https://packages.unity.com"
    },
    "com.unity.ide.visualstudio": {
      "version": "2.0.22",
      "depth": 0,
      "source": "registry",
      "dependencies": {
        "com.unity.test-framework": "1.1.9"
      },
      "url": "https://packages.unity.com"
    },
    "com.unity.ide.vscode": {
      "version": "1.2.5",
      "depth": 0,
      "source": "registry",
      "dependencies": {},
      "url": "https://packages.unity.com"
    },
    "com.unity.ide.windsurf": {
      "version": "https://github.com/Asuta/com.unity.ide.windsurf.git",
      "depth": 0,
      "source": "git",
      "dependencies": {
        "com.unity.test-framework": "1.1.9"
      },
      "hash": "6161accf3e7beab96341813913e714c7e2fb5c5d"
    },
    "com.unity.nuget.newtonsoft-json": {
      "version": "3.2.1",
      "depth": 1,
      "source": "registry",
      "dependencies": {},
      "url": "https://packages.unity.com"
    },
    "com.unity.performance.profile-analyzer": {
      "version": "1.2.2",
      "depth": 1,
      "source": "registry",
      "dependencies": {},
      "url": "https://packages.unity.com"
    },
    "com.unity.settings-manager": {
      "version": "1.0.3",
      "depth": 2,
      "source": "registry",
      "dependencies": {},
      "url": "https://packages.unity.com"
    },
    "com.unity.test-framework": {
      "version": "1.1.33",
      "depth": 0,
      "source": "registry",
      "dependencies": {
        "com.unity.ext.nunit": "1.0.6",
        "com.unity.modules.imgui": "1.0.0",
        "com.unity.modules.jsonserialize": "1.0.0"
      },
      "url": "https://packages.unity.com"
    },
    "com.unity.testtools.codecoverage": {
      "version": "1.2.6",
      "depth": 1,
      "source": "registry",
      "dependencies": {
        "com.unity.test-framework": "1.0.16",
        "com.unity.settings-manager": "1.0.1"
      },
      "url": "https://packages.unity.com"
    },
    "com.unity.textmeshpro": {
      "version": "3.0.6",
      "depth": 0,
      "source": "registry",
      "dependencies": {
        "com.unity.ugui": "1.0.0"
      },
      "url": "https://packages.unity.com"
    },
    "com.unity.timeline": {
      "version": "1.6.5",
      "depth": 0,
      "source": "registry",
      "dependencies": {
        "com.unity.modules.audio": "1.0.0",
        "com.unity.modules.director": "1.0.0",
        "com.unity.modules.animation": "1.0.0",
        "com.unity.modules.particlesystem": "1.0.0"
      },
      "url": "https://packages.unity.com"
    },
    "com.unity.ugui": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {
        "com.unity.modules.ui": "1.0.0",
        "com.unity.modules.imgui": "1.0.0"
      }
    },
    "com.unity.visualscripting": {
      "version": "1.9.4",
      "depth": 0,
      "source": "registry",
      "dependencies": {
        "com.unity.ugui": "1.0.0",
        "com.unity.modules.jsonserialize": "1.0.0"
      },
      "url": "https://packages.unity.com"
    },
    "com.unity.modules.ai": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.androidjni": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.animation": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.assetbundle": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.audio": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.cloth": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {
        "com.unity.modules.physics": "1.0.0"
      }
    },
    "com.unity.modules.director": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {
        "com.unity.modules.audio": "1.0.0",
        "com.unity.modules.animation": "1.0.0"
      }
    },
    "com.unity.modules.imageconversion": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.imgui": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.jsonserialize": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.particlesystem": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.physics": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.physics2d": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.screencapture": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {
        "com.unity.modules.imageconversion": "1.0.0"
      }
    },
    "com.unity.modules.subsystems": {
      "version": "1.0.0",
      "depth": 1,
      "source": "builtin",
      "dependencies": {
        "com.unity.modules.jsonserialize": "1.0.0"
      }
    },
    "com.unity.modules.terrain": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.terrainphysics": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {
        "com.unity.modules.physics": "1.0.0",
        "com.unity.modules.terrain": "1.0.0"
      }
    },
    "com.unity.modules.tilemap": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {
        "com.unity.modules.physics2d": "1.0.0"
      }
    },
    "com.unity.modules.ui": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.uielements": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {
        "com.unity.modules.ui": "1.0.0",
        "com.unity.modules.imgui": "1.0.0",
        "com.unity.modules.jsonserialize": "1.0.0",
        "com.unity.modules.uielementsnative": "1.0.0"
      }
    },
    "com.unity.modules.uielementsnative": {
      "version": "1.0.0",
      "depth": 1,
      "source": "builtin",
      "dependencies": {
        "com.unity.modules.ui": "1.0.0",
        "com.unity.modules.imgui": "1.0.0",
        "com.unity.modules.jsonserialize": "1.0.0"
      }
    },
    "com.unity.modules.umbra": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {}
    },
    "com.unity.modules.unityanalytics": {
      "version": "1.0.0",
      "depth": 0,
      "source": "builtin",
      "dependencies": {
        "com.unity.modules.unitywebrequest": "1.0.0",
|
||||
"com.unity.modules.jsonserialize": "1.0.0"
|
||||
}
|
||||
},
|
||||
"com.unity.modules.unitywebrequest": {
|
||||
"version": "1.0.0",
|
||||
"depth": 0,
|
||||
"source": "builtin",
|
||||
"dependencies": {}
|
||||
},
|
||||
"com.unity.modules.unitywebrequestassetbundle": {
|
||||
"version": "1.0.0",
|
||||
"depth": 0,
|
||||
"source": "builtin",
|
||||
"dependencies": {
|
||||
"com.unity.modules.assetbundle": "1.0.0",
|
||||
"com.unity.modules.unitywebrequest": "1.0.0"
|
||||
}
|
||||
},
|
||||
"com.unity.modules.unitywebrequestaudio": {
|
||||
"version": "1.0.0",
|
||||
"depth": 0,
|
||||
"source": "builtin",
|
||||
"dependencies": {
|
||||
"com.unity.modules.unitywebrequest": "1.0.0",
|
||||
"com.unity.modules.audio": "1.0.0"
|
||||
}
|
||||
},
|
||||
"com.unity.modules.unitywebrequesttexture": {
|
||||
"version": "1.0.0",
|
||||
"depth": 0,
|
||||
"source": "builtin",
|
||||
"dependencies": {
|
||||
"com.unity.modules.unitywebrequest": "1.0.0",
|
||||
"com.unity.modules.imageconversion": "1.0.0"
|
||||
}
|
||||
},
|
||||
"com.unity.modules.unitywebrequestwww": {
|
||||
"version": "1.0.0",
|
||||
"depth": 0,
|
||||
"source": "builtin",
|
||||
"dependencies": {
|
||||
"com.unity.modules.unitywebrequest": "1.0.0",
|
||||
"com.unity.modules.unitywebrequestassetbundle": "1.0.0",
|
||||
"com.unity.modules.unitywebrequestaudio": "1.0.0",
|
||||
"com.unity.modules.audio": "1.0.0",
|
||||
"com.unity.modules.assetbundle": "1.0.0",
|
||||
"com.unity.modules.imageconversion": "1.0.0"
|
||||
}
|
||||
},
|
||||
"com.unity.modules.vehicles": {
|
||||
"version": "1.0.0",
|
||||
"depth": 0,
|
||||
"source": "builtin",
|
||||
"dependencies": {
|
||||
"com.unity.modules.physics": "1.0.0"
|
||||
}
|
||||
},
|
||||
"com.unity.modules.video": {
|
||||
"version": "1.0.0",
|
||||
"depth": 0,
|
||||
"source": "builtin",
|
||||
"dependencies": {
|
||||
"com.unity.modules.audio": "1.0.0",
|
||||
"com.unity.modules.ui": "1.0.0",
|
||||
"com.unity.modules.unitywebrequest": "1.0.0"
|
||||
}
|
||||
},
|
||||
"com.unity.modules.vr": {
|
||||
"version": "1.0.0",
|
||||
"depth": 0,
|
||||
"source": "builtin",
|
||||
"dependencies": {
|
||||
"com.unity.modules.jsonserialize": "1.0.0",
|
||||
"com.unity.modules.physics": "1.0.0",
|
||||
"com.unity.modules.xr": "1.0.0"
|
||||
}
|
||||
},
|
||||
"com.unity.modules.wind": {
|
||||
"version": "1.0.0",
|
||||
"depth": 0,
|
||||
"source": "builtin",
|
||||
"dependencies": {}
|
||||
},
|
||||
"com.unity.modules.xr": {
|
||||
"version": "1.0.0",
|
||||
"depth": 0,
|
||||
"source": "builtin",
|
||||
"dependencies": {
|
||||
"com.unity.modules.physics": "1.0.0",
|
||||
"com.unity.modules.jsonserialize": "1.0.0",
|
||||
"com.unity.modules.subsystems": "1.0.0"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
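The lock-file entries above form a flat dependency table: each built-in module lists its version, resolution depth, source, and the modules it depends on. A quick sketch (module list abbreviated, function name my own) of how such a table can be checked for closure, i.e. that every referenced dependency is itself defined:

```python
def missing_deps(lock):
    """Return dependency names referenced in the table but not defined by it."""
    return sorted(
        dep
        for entry in lock.values()
        for dep in entry.get("dependencies", {})
        if dep not in lock
    )


# Abbreviated slice of the table above (version/depth/source fields elided).
lock = {
    "com.unity.modules.physics": {"dependencies": {}},
    "com.unity.modules.jsonserialize": {"dependencies": {}},
    "com.unity.modules.subsystems": {
        "dependencies": {"com.unity.modules.jsonserialize": "1.0.0"}
    },
    "com.unity.modules.xr": {
        "dependencies": {
            "com.unity.modules.physics": "1.0.0",
            "com.unity.modules.jsonserialize": "1.0.0",
            "com.unity.modules.subsystems": "1.0.0",
        }
    },
}
```

The full table above is closed in the same way: every dependency named by a module (for example `com.unity.modules.xr` → `com.unity.modules.subsystems` → `com.unity.modules.jsonserialize`) appears as a top-level entry.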
@@ -1,6 +1,4 @@
{
    "m_Name": "Settings",
    "m_Path": "ProjectSettings/Packages/com.unity.testtools.codecoverage/Settings.json",
    "m_Dictionary": {
        "m_DictionaryValues": []
    }
@@ -19,6 +19,11 @@ namespace MCPForUnity.Editor.Data
                ".cursor",
                "mcp.json"
            ),
            macConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                ".cursor",
                "mcp.json"
            ),
            linuxConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                ".cursor",
@@ -35,6 +40,10 @@ namespace MCPForUnity.Editor.Data
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                ".claude.json"
            ),
            macConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                ".claude.json"
            ),
            linuxConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                ".claude.json"
@@ -52,6 +61,12 @@ namespace MCPForUnity.Editor.Data
                "windsurf",
                "mcp_config.json"
            ),
            macConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                ".codeium",
                "windsurf",
                "mcp_config.json"
            ),
            linuxConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                ".codeium",
@@ -70,22 +85,21 @@ namespace MCPForUnity.Editor.Data
                "Claude",
                "claude_desktop_config.json"
            ),
            // For macOS, Claude Desktop stores config under ~/Library/Application Support/Claude
            // For Linux, it remains under ~/.config/Claude
            linuxConfigPath = RuntimeInformation.IsOSPlatform(OSPlatform.OSX)
                ? Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                    "Library",
                    "Application Support",
                    "Claude",
                    "claude_desktop_config.json"
                )
                : Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                    ".config",
                    "Claude",
                    "claude_desktop_config.json"
                ),

            macConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                "Library",
                "Application Support",
                "Claude",
                "claude_desktop_config.json"
            ),
            linuxConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                ".config",
                "Claude",
                "claude_desktop_config.json"
            ),

            mcpType = McpTypes.ClaudeDesktop,
            configStatus = "Not Configured",
        },
@@ -100,24 +114,23 @@ namespace MCPForUnity.Editor.Data
                "User",
                "mcp.json"
            ),
            // For macOS, VSCode stores user config under ~/Library/Application Support/Code/User
            // For Linux, it remains under ~/.config/Code/User
            linuxConfigPath = RuntimeInformation.IsOSPlatform(OSPlatform.OSX)
                ? Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                    "Library",
                    "Application Support",
                    "Code",
                    "User",
                    "mcp.json"
                )
                : Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                    ".config",
                    "Code",
                    "User",
                    "mcp.json"
                ),
            // macOS: ~/Library/Application Support/Code/User/mcp.json
            macConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                "Library",
                "Application Support",
                "Code",
                "User",
                "mcp.json"
            ),
            // Linux: ~/.config/Code/User/mcp.json
            linuxConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                ".config",
                "Code",
                "User",
                "mcp.json"
            ),
            mcpType = McpTypes.VSCode,
            configStatus = "Not Configured",
        },
@@ -131,6 +144,12 @@ namespace MCPForUnity.Editor.Data
                "settings",
                "mcp.json"
            ),
            macConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                ".kiro",
                "settings",
                "mcp.json"
            ),
            linuxConfigPath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                ".kiro",
@@ -54,7 +54,7 @@ namespace MCPForUnity.Editor.Helpers
            // For Cursor (non-VSCode) on macOS, prefer a no-spaces symlink path to avoid arg parsing issues in some runners
            string effectiveDir = directory;
#if UNITY_EDITOR_OSX || UNITY_STANDALONE_OSX
            bool isCursor = !isVSCode && (client == null || client.mcpType != Models.McpTypes.VSCode);
            bool isCursor = !isVSCode && (client == null || client.mcpType != McpTypes.VSCode);
            if (isCursor && !string.IsNullOrEmpty(directory))
            {
                // Replace canonical path segment with the symlink path if present
@@ -65,7 +65,11 @@ namespace MCPForUnity.Editor.Helpers
                // Normalize to full path style
                if (directory.Contains(canonical))
                {
                    effectiveDir = directory.Replace(canonical, symlinkSeg);
                    var candidate = directory.Replace(canonical, symlinkSeg).Replace('\\', '/');
                    if (System.IO.Directory.Exists(candidate))
                    {
                        effectiveDir = candidate;
                    }
                }
                else
                {
@@ -76,7 +80,11 @@ namespace MCPForUnity.Editor.Helpers
                {
                    string home = System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal) ?? string.Empty;
                    string suffix = norm.Substring(idx + "/.local/share/".Length); // UnityMCP/...
                    effectiveDir = System.IO.Path.Combine(home, "Library", "AppSupport", suffix).Replace('\\', '/');
                    string candidate = System.IO.Path.Combine(home, "Library", "AppSupport", suffix).Replace('\\', '/');
                    if (System.IO.Directory.Exists(candidate))
                    {
                        effectiveDir = candidate;
                    }
                }
            }
        }
@@ -25,19 +25,32 @@ namespace MCPForUnity.Editor.Helpers

            if (!EditorPrefs.GetBool(key, false) || legacyPresent || canonicalMissing)
            {
                // Marshal the entire flow to the main thread. EnsureServerInstalled may touch Unity APIs.
                EditorApplication.delayCall += () =>
                {
                    string error = null;
                    System.Exception capturedEx = null;
                    try
                    {
                        // Ensure any UnityEditor API usage inside runs on the main thread
                        ServerInstaller.EnsureServerInstalled();
                    }
                    catch (System.Exception ex)
                    {
                        Debug.LogWarning("MCP for Unity: Auto-detect on load failed: " + ex.Message);
                        error = ex.Message;
                        capturedEx = ex;
                    }
                    finally

                    // Unity APIs must stay on main thread
                    try { EditorPrefs.SetBool(key, true); } catch { }
                    // Ensure prefs cleanup happens on main thread
                    try { EditorPrefs.DeleteKey("MCPForUnity.ServerSrc"); } catch { }
                    try { EditorPrefs.DeleteKey("MCPForUnity.PythonDirOverride"); } catch { }

                    if (!string.IsNullOrEmpty(error))
                    {
                        EditorPrefs.SetBool(key, true);
                        Debug.LogWarning($"MCP for Unity: Auto-detect on load failed: {capturedEx}");
                        // Alternatively: Debug.LogException(capturedEx);
                    }
                };
            }
@@ -35,10 +35,10 @@ namespace MCPForUnity.Editor.Helpers
        /// <summary>
        /// Creates a standardized error response object.
        /// </summary>
        /// <param name="errorMessage">A message describing the error.</param>
        /// <param name="errorCodeOrMessage">A message describing the error.</param>
        /// <param name="data">Optional additional data (e.g., error details) to include.</param>
        /// <returns>An object representing the error response.</returns>
        public static object Error(string errorMessage, object data = null)
        public static object Error(string errorCodeOrMessage, object data = null)
        {
            if (data != null)
            {
@@ -46,13 +46,16 @@ namespace MCPForUnity.Editor.Helpers
                return new
                {
                    success = false,
                    error = errorMessage,
                    // Preserve original behavior while adding a machine-parsable code field.
                    // If callers pass a code string, it will be echoed in both code and error.
                    code = errorCodeOrMessage,
                    error = errorCodeOrMessage,
                    data = data,
                };
            }
            else
            {
                return new { success = false, error = errorMessage };
                return new { success = false, code = errorCodeOrMessage, error = errorCodeOrMessage };
            }
        }
    }
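With the hunk above, `Response.Error` echoes its first argument in both a new machine-parsable `code` field and the existing `error` field, and attaches `data` only when one is supplied. A sketch of the resulting JSON shape as a Python stand-in (the function name is mine, not the C# helper):

```python
import json


def error_response(code_or_message, data=None):
    """Sketch of the payload shape Response.Error now emits: the argument is
    echoed in both 'code' and 'error'; 'data' appears only when provided."""
    body = {"success": False, "code": code_or_message, "error": code_or_message}
    if data is not None:
        body["data"] = data
    return json.dumps(body)
```

Callers that already parse `error` keep working, while tooling can branch on `code` without string-matching human-readable messages.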
@@ -35,6 +35,8 @@ namespace MCPForUnity.Editor
        > commandQueue = new();
        private static int currentUnityPort = 6400; // Dynamic port, starts with default
        private static bool isAutoConnectMode = false;
        private const ulong MaxFrameBytes = 64UL * 1024 * 1024; // 64 MiB hard cap for framed payloads
        private const int FrameIOTimeoutMs = 30000; // Per-read timeout to avoid stalled clients

        // Debug helpers
        private static bool IsDebugEnabled()
@@ -96,8 +98,9 @@ namespace MCPForUnity.Editor

        static MCPForUnityBridge()
        {
            // Skip bridge in headless/batch environments (CI/builds)
            if (Application.isBatchMode)
            // Skip bridge in headless/batch environments (CI/builds) unless explicitly allowed via env
            // CI override: set UNITY_MCP_ALLOW_BATCH=1 to allow the bridge in batch mode
            if (Application.isBatchMode && string.IsNullOrWhiteSpace(Environment.GetEnvironmentVariable("UNITY_MCP_ALLOW_BATCH")))
            {
                return;
            }
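The guard above lets CI opt in with `UNITY_MCP_ALLOW_BATCH`: the bridge is skipped only when the editor runs in batch mode and the variable is unset or whitespace. The same predicate sketched as a plain function (name hypothetical):

```python
def should_start_bridge(is_batch_mode, allow_batch_env):
    """Mirror of the C# guard: start the bridge unless the editor is in
    batch mode and UNITY_MCP_ALLOW_BATCH is unset or blank (whitespace)."""
    blank = allow_batch_env is None or allow_batch_env.strip() == ""
    return not (is_batch_mode and blank)
```

This matches `string.IsNullOrWhiteSpace` semantics: any non-blank value (not just `"1"`) enables the bridge in batch mode.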
@@ -341,7 +344,7 @@ namespace MCPForUnity.Editor
            // Mark as stopping early to avoid accept logging during disposal
            isRunning = false;
            // Mark heartbeat one last time before stopping
            WriteHeartbeat(false);
            WriteHeartbeat(false, "stopped");
            listener?.Stop();
            listener = null;
            EditorApplication.update -= ProcessCommands;
@@ -397,22 +400,50 @@ namespace MCPForUnity.Editor
            using (client)
            using (NetworkStream stream = client.GetStream())
            {
                byte[] buffer = new byte[8192];
                // Framed I/O only; legacy mode removed
                try
                {
                    var ep = client.Client?.RemoteEndPoint?.ToString() ?? "unknown";
                    Debug.Log($"<b><color=#2EA3FF>UNITY-MCP</color></b>: Client connected {ep}");
                }
                catch { }
                // Strict framing: always require FRAMING=1 and frame all I/O
                try
                {
                    client.NoDelay = true;
                }
                catch { }
                try
                {
                    string handshake = "WELCOME UNITY-MCP 1 FRAMING=1\n";
                    byte[] handshakeBytes = System.Text.Encoding.ASCII.GetBytes(handshake);
                    using var cts = new CancellationTokenSource(FrameIOTimeoutMs);
#if NETSTANDARD2_1 || NET6_0_OR_GREATER
                    await stream.WriteAsync(handshakeBytes.AsMemory(0, handshakeBytes.Length), cts.Token).ConfigureAwait(false);
#else
                    await stream.WriteAsync(handshakeBytes, 0, handshakeBytes.Length, cts.Token).ConfigureAwait(false);
#endif
                    Debug.Log("<b><color=#2EA3FF>UNITY-MCP</color></b>: Sent handshake FRAMING=1 (strict)");
                }
                catch (Exception ex)
                {
                    Debug.LogWarning($"<b><color=#2EA3FF>UNITY-MCP</color></b>: Handshake failed: {ex.Message}");
                    return; // abort this client
                }

                while (isRunning)
                {
                    try
                    {
                        int bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length);
                        if (bytesRead == 0)
                        {
                            break; // Client disconnected
                        }
                        // Strict framed mode only: enforced framed I/O for this connection
                        string commandText = await ReadFrameAsUtf8Async(stream, FrameIOTimeoutMs);

                        string commandText = System.Text.Encoding.UTF8.GetString(
                            buffer,
                            0,
                            bytesRead
                        );
                        try
                        {
                            var preview = commandText.Length > 120 ? commandText.Substring(0, 120) + "…" : commandText;
                            Debug.Log($"<b><color=#2EA3FF>UNITY-MCP</color></b>: recv framed: {preview}");
                        }
                        catch { }
                        string commandId = Guid.NewGuid().ToString();
                        TaskCompletionSource<string> tcs = new();
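The CI readiness probe this PR introduces can key off the one-line ASCII handshake sent above (`WELCOME UNITY-MCP 1 FRAMING=1`). A minimal client-side sketch, assuming a blocking byte stream such as `socket.makefile("rb")`; the function name is my own:

```python
import io


def read_handshake(stream):
    """Read the bridge's single-line ASCII handshake and require strict framing.

    Returns True only when a full line arrives and advertises FRAMING=1.
    """
    line = b""
    while not line.endswith(b"\n"):
        ch = stream.read(1)
        if not ch:           # peer closed before finishing the line
            return False
        line += ch
        if len(line) > 256:  # defensive cap; the real handshake line is short
            return False
    return b"FRAMING=1" in line
```

Against a live bridge one would connect a TCP socket to the Unity MCP port, call `read_handshake(sock.makefile("rb"))`, and only then exchange framed payloads; that is the deterministic "TCP handshake readiness probe" described in the CI notes.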
@@ -424,7 +455,7 @@ namespace MCPForUnity.Editor
                            /*lang=json,strict*/
                            "{\"status\":\"success\",\"result\":{\"message\":\"pong\"}}"
                        );
                        await stream.WriteAsync(pingResponseBytes, 0, pingResponseBytes.Length);
                        await WriteFrameAsync(stream, pingResponseBytes);
                        continue;
                    }
@@ -435,7 +466,7 @@ namespace MCPForUnity.Editor

                        string response = await tcs.Task;
                        byte[] responseBytes = System.Text.Encoding.UTF8.GetBytes(response);
                        await stream.WriteAsync(responseBytes, 0, responseBytes.Length);
                        await WriteFrameAsync(stream, responseBytes);
                    }
                    catch (Exception ex)
                    {
@@ -446,120 +477,240 @@ namespace MCPForUnity.Editor
                    }
                }

        private static void ProcessCommands()
        // Timeout-aware exact read helper with cancellation; avoids indefinite stalls and background task leaks
        private static async System.Threading.Tasks.Task<byte[]> ReadExactAsync(NetworkStream stream, int count, int timeoutMs, CancellationToken cancel = default)
        {
            List<string> processedIds = new();
            lock (lockObj)
            byte[] buffer = new byte[count];
            int offset = 0;
            var stopwatch = System.Diagnostics.Stopwatch.StartNew();

            while (offset < count)
            {
                // Periodic heartbeat while editor is idle/processing
                double now = EditorApplication.timeSinceStartup;
                if (now >= nextHeartbeatAt)
                int remaining = count - offset;
                int remainingTimeout = timeoutMs <= 0
                    ? Timeout.Infinite
                    : timeoutMs - (int)stopwatch.ElapsedMilliseconds;

                // If a finite timeout is configured and already elapsed, fail immediately
                if (remainingTimeout != Timeout.Infinite && remainingTimeout <= 0)
                {
                    WriteHeartbeat(false);
                    nextHeartbeatAt = now + 0.5f;
                    throw new System.IO.IOException("Read timed out");
                }

                foreach (
                    KeyValuePair<
                        string,
                        (string commandJson, TaskCompletionSource<string> tcs)
                    > kvp in commandQueue.ToList()
                )
                using var cts = CancellationTokenSource.CreateLinkedTokenSource(cancel);
                if (remainingTimeout != Timeout.Infinite)
                {
                    string id = kvp.Key;
                    string commandText = kvp.Value.commandJson;
                    TaskCompletionSource<string> tcs = kvp.Value.tcs;
                    cts.CancelAfter(remainingTimeout);
                }

                try
                try
                {
#if NETSTANDARD2_1 || NET6_0_OR_GREATER
                    int read = await stream.ReadAsync(buffer.AsMemory(offset, remaining), cts.Token).ConfigureAwait(false);
#else
                    int read = await stream.ReadAsync(buffer, offset, remaining, cts.Token).ConfigureAwait(false);
#endif
                    if (read == 0)
                    {
                        // Special case handling
                        if (string.IsNullOrEmpty(commandText))
                        {
                            var emptyResponse = new
                            {
                                status = "error",
                                error = "Empty command received",
                            };
                            tcs.SetResult(JsonConvert.SerializeObject(emptyResponse));
                            processedIds.Add(id);
                            continue;
                        }

                        // Trim the command text to remove any whitespace
                        commandText = commandText.Trim();

                        // Non-JSON direct commands handling (like ping)
                        if (commandText == "ping")
                        {
                            var pingResponse = new
                            {
                                status = "success",
                                result = new { message = "pong" },
                            };
                            tcs.SetResult(JsonConvert.SerializeObject(pingResponse));
                            processedIds.Add(id);
                            continue;
                        }

                        // Check if the command is valid JSON before attempting to deserialize
                        if (!IsValidJson(commandText))
                        {
                            var invalidJsonResponse = new
                            {
                                status = "error",
                                error = "Invalid JSON format",
                                receivedText = commandText.Length > 50
                                    ? commandText[..50] + "..."
                                    : commandText,
                            };
                            tcs.SetResult(JsonConvert.SerializeObject(invalidJsonResponse));
                            processedIds.Add(id);
                            continue;
                        }

                        // Normal JSON command processing
                        Command command = JsonConvert.DeserializeObject<Command>(commandText);

                        if (command == null)
                        {
                            var nullCommandResponse = new
                            {
                                status = "error",
                                error = "Command deserialized to null",
                                details = "The command was valid JSON but could not be deserialized to a Command object",
                            };
                            tcs.SetResult(JsonConvert.SerializeObject(nullCommandResponse));
                        }
                        else
                        {
                            string responseJson = ExecuteCommand(command);
                            tcs.SetResult(responseJson);
                        }
                        throw new System.IO.IOException("Connection closed before reading expected bytes");
                    }
                    catch (Exception ex)
                    {
                        Debug.LogError($"Error processing command: {ex.Message}\n{ex.StackTrace}");
                    offset += read;
                }
                catch (OperationCanceledException) when (!cancel.IsCancellationRequested)
                {
                    throw new System.IO.IOException("Read timed out");
                }
            }

                        var response = new
            return buffer;
        }

        private static async System.Threading.Tasks.Task WriteFrameAsync(NetworkStream stream, byte[] payload)
        {
            using var cts = new CancellationTokenSource(FrameIOTimeoutMs);
            await WriteFrameAsync(stream, payload, cts.Token);
        }

        private static async System.Threading.Tasks.Task WriteFrameAsync(NetworkStream stream, byte[] payload, CancellationToken cancel)
        {
            if (payload == null)
            {
                throw new System.ArgumentNullException(nameof(payload));
            }
            if ((ulong)payload.LongLength > MaxFrameBytes)
            {
                throw new System.IO.IOException($"Frame too large: {payload.LongLength}");
            }
            byte[] header = new byte[8];
            WriteUInt64BigEndian(header, (ulong)payload.LongLength);
#if NETSTANDARD2_1 || NET6_0_OR_GREATER
            await stream.WriteAsync(header.AsMemory(0, header.Length), cancel).ConfigureAwait(false);
            await stream.WriteAsync(payload.AsMemory(0, payload.Length), cancel).ConfigureAwait(false);
#else
            await stream.WriteAsync(header, 0, header.Length, cancel).ConfigureAwait(false);
            await stream.WriteAsync(payload, 0, payload.Length, cancel).ConfigureAwait(false);
#endif
        }

        private static async System.Threading.Tasks.Task<string> ReadFrameAsUtf8Async(NetworkStream stream, int timeoutMs)
        {
            byte[] header = await ReadExactAsync(stream, 8, timeoutMs);
            ulong payloadLen = ReadUInt64BigEndian(header);
            if (payloadLen > MaxFrameBytes)
            {
                throw new System.IO.IOException($"Invalid framed length: {payloadLen}");
            }
            if (payloadLen == 0UL)
                throw new System.IO.IOException("Zero-length frames are not allowed");
            if (payloadLen > int.MaxValue)
            {
                throw new System.IO.IOException("Frame too large for buffer");
            }
            int count = (int)payloadLen;
            byte[] payload = await ReadExactAsync(stream, count, timeoutMs);
            return System.Text.Encoding.UTF8.GetString(payload);
        }

        private static ulong ReadUInt64BigEndian(byte[] buffer)
        {
            if (buffer == null || buffer.Length < 8) return 0UL;
            return ((ulong)buffer[0] << 56)
                | ((ulong)buffer[1] << 48)
                | ((ulong)buffer[2] << 40)
                | ((ulong)buffer[3] << 32)
                | ((ulong)buffer[4] << 24)
                | ((ulong)buffer[5] << 16)
                | ((ulong)buffer[6] << 8)
                | buffer[7];
        }

        private static void WriteUInt64BigEndian(byte[] dest, ulong value)
        {
            if (dest == null || dest.Length < 8)
            {
                throw new System.ArgumentException("Destination buffer too small for UInt64");
            }
            dest[0] = (byte)(value >> 56);
            dest[1] = (byte)(value >> 48);
            dest[2] = (byte)(value >> 40);
            dest[3] = (byte)(value >> 32);
            dest[4] = (byte)(value >> 24);
            dest[5] = (byte)(value >> 16);
            dest[6] = (byte)(value >> 8);
            dest[7] = (byte)(value);
        }

        private static void ProcessCommands()
        {
            // Heartbeat without holding the queue lock
            double now = EditorApplication.timeSinceStartup;
            if (now >= nextHeartbeatAt)
            {
                WriteHeartbeat(false);
                nextHeartbeatAt = now + 0.5f;
            }

            // Snapshot under lock, then process outside to reduce contention
            List<(string id, string text, TaskCompletionSource<string> tcs)> work;
            lock (lockObj)
            {
                work = commandQueue
                    .Select(kvp => (kvp.Key, kvp.Value.commandJson, kvp.Value.tcs))
                    .ToList();
            }

            foreach (var item in work)
            {
                string id = item.id;
                string commandText = item.text;
                TaskCompletionSource<string> tcs = item.tcs;

                try
                {
                    // Special case handling
                    if (string.IsNullOrEmpty(commandText))
                    {
                        var emptyResponse = new
                        {
                            status = "error",
                            error = ex.Message,
                            commandType = "Unknown (error during processing)",
                            receivedText = commandText?.Length > 50
                            error = "Empty command received",
                        };
                        tcs.SetResult(JsonConvert.SerializeObject(emptyResponse));
                        // Remove quickly under lock
                        lock (lockObj) { commandQueue.Remove(id); }
                        continue;
                    }

                    // Trim the command text to remove any whitespace
                    commandText = commandText.Trim();

                    // Non-JSON direct commands handling (like ping)
                    if (commandText == "ping")
                    {
                        var pingResponse = new
                        {
                            status = "success",
                            result = new { message = "pong" },
                        };
                        tcs.SetResult(JsonConvert.SerializeObject(pingResponse));
                        lock (lockObj) { commandQueue.Remove(id); }
                        continue;
                    }

                    // Check if the command is valid JSON before attempting to deserialize
                    if (!IsValidJson(commandText))
                    {
                        var invalidJsonResponse = new
                        {
                            status = "error",
                            error = "Invalid JSON format",
                            receivedText = commandText.Length > 50
                                ? commandText[..50] + "..."
                                : commandText,
                        };
                        string responseJson = JsonConvert.SerializeObject(response);
                        tcs.SetResult(responseJson);
                        tcs.SetResult(JsonConvert.SerializeObject(invalidJsonResponse));
                        lock (lockObj) { commandQueue.Remove(id); }
                        continue;
                    }

                    processedIds.Add(id);
                    // Normal JSON command processing
                    Command command = JsonConvert.DeserializeObject<Command>(commandText);

                    if (command == null)
                    {
                        var nullCommandResponse = new
                        {
                            status = "error",
                            error = "Command deserialized to null",
                            details = "The command was valid JSON but could not be deserialized to a Command object",
                        };
                        tcs.SetResult(JsonConvert.SerializeObject(nullCommandResponse));
                    }
                    else
                    {
                        string responseJson = ExecuteCommand(command);
                        tcs.SetResult(responseJson);
                    }
                }
                catch (Exception ex)
                {
                    Debug.LogError($"Error processing command: {ex.Message}\n{ex.StackTrace}");

                    var response = new
                    {
                        status = "error",
                        error = ex.Message,
                        commandType = "Unknown (error during processing)",
                        receivedText = commandText?.Length > 50
                            ? commandText[..50] + "..."
                            : commandText,
                    };
                    string responseJson = JsonConvert.SerializeObject(response);
                    tcs.SetResult(responseJson);
                }

                foreach (string id in processedIds)
                {
                    commandQueue.Remove(id);
                }
                // Remove quickly under lock
                lock (lockObj) { commandQueue.Remove(id); }
            }
        }
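The `WriteUInt64BigEndian`/`ReadUInt64BigEndian` pair above implements an 8-byte big-endian length prefix, with a 64 MiB cap and zero-length frames rejected. The equivalent round-trip in Python (function names my own), which is the wire format any peer of this bridge would speak:

```python
import struct

MAX_FRAME = 64 * 1024 * 1024  # mirrors MaxFrameBytes: 64 MiB hard cap


def encode_frame(payload: bytes) -> bytes:
    """8-byte big-endian length prefix followed by the payload."""
    if not payload or len(payload) > MAX_FRAME:
        raise ValueError("payload must be between 1 byte and 64 MiB")
    return struct.pack(">Q", len(payload)) + payload


def decode_frame(buf: bytes) -> bytes:
    """Inverse of encode_frame; validates the advertised length first."""
    (length,) = struct.unpack(">Q", buf[:8])
    if length == 0 or length > MAX_FRAME:
        raise ValueError("invalid framed length")
    return buf[8 : 8 + length]
```

`struct.pack(">Q", n)` produces exactly the byte layout of the shift-based C# writer above (most significant byte first).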
@@ -709,7 +860,12 @@ namespace MCPForUnity.Editor
        {
            try
            {
                string dir = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), ".unity-mcp");
                // Allow override of status directory (useful in CI/containers)
                string dir = Environment.GetEnvironmentVariable("UNITY_MCP_STATUS_DIR");
                if (string.IsNullOrWhiteSpace(dir))
                {
                    dir = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), ".unity-mcp");
                }
                Directory.CreateDirectory(dir);
                string filePath = Path.Combine(dir, $"unity-mcp-status-{ComputeProjectHash(Application.dataPath)}.json");
                var payload = new
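The status-file change above resolves the heartbeat directory from `UNITY_MCP_STATUS_DIR` when it is set and non-blank, falling back to `~/.unity-mcp` otherwise. The same resolution sketched as a pure function (name hypothetical; the per-project hash suffix is omitted):

```python
import os


def status_dir(env, home):
    """Resolve the status directory the way the C# now does:
    UNITY_MCP_STATUS_DIR wins when set and non-blank, else <home>/.unity-mcp."""
    override = env.get("UNITY_MCP_STATUS_DIR", "")
    if override and override.strip():
        return override
    return os.path.join(home, ".unity-mcp")
```

In the CI containers this PR sets up, pointing the variable at a mounted path lets the readiness probe on the host read the same status file the Editor writes.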
@@ -4,8 +4,8 @@ namespace MCPForUnity.Editor.Models
    {
        public string name;
        public string windowsConfigPath;
        public string macConfigPath;
        public string linuxConfigPath;
        public string macConfigPath; // optional macOS-specific config path
        public McpTypes mcpType;
        public string configStatus;
        public McpStatus status = McpStatus.NotConfigured;
@@ -1,6 +1,7 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.IO;
using Newtonsoft.Json.Linq;
using UnityEditor;
using UnityEditorInternal; // Required for tag management
@@ -89,6 +90,8 @@ namespace MCPForUnity.Editor.Tools
                // Editor State/Info
                case "get_state":
                    return GetEditorState();
                case "get_project_root":
                    return GetProjectRoot();
                case "get_windows":
                    return GetEditorWindows();
                case "get_active_tool":
@@ -137,7 +140,7 @@ namespace MCPForUnity.Editor.Tools

                default:
                    return Response.Error(
                        $"Unknown action: '{action}'. Supported actions include play, pause, stop, get_state, get_windows, get_active_tool, get_selection, set_active_tool, add_tag, remove_tag, get_tags, add_layer, remove_layer, get_layers."
                        $"Unknown action: '{action}'. Supported actions include play, pause, stop, get_state, get_project_root, get_windows, get_active_tool, get_selection, set_active_tool, add_tag, remove_tag, get_tags, add_layer, remove_layer, get_layers."
                    );
            }
        }
@ -165,6 +168,25 @@ namespace MCPForUnity.Editor.Tools
|
|||
}
|
||||
}
|
||||
|
||||
private static object GetProjectRoot()
|
||||
{
|
||||
try
|
||||
{
|
||||
// Application.dataPath points to <Project>/Assets
|
||||
string assetsPath = Application.dataPath.Replace('\\', '/');
|
||||
string projectRoot = Directory.GetParent(assetsPath)?.FullName.Replace('\\', '/');
|
||||
if (string.IsNullOrEmpty(projectRoot))
|
||||
{
|
||||
return Response.Error("Could not determine project root from Application.dataPath");
|
||||
}
|
||||
return Response.Success("Project root resolved.", new { projectRoot });
|
||||
}
|
||||
catch (Exception e)
|
||||
{
|
||||
return Response.Error($"Error getting project root: {e.Message}");
|
||||
}
|
||||
}
|
||||
|
||||
private static object GetEditorWindows()
|
||||
{
|
||||
try
|
||||
|
|
|
File diff suppressed because it is too large
@@ -723,9 +723,8 @@ namespace MCPForUnity.Editor.Windows
                 string na = System.IO.Path.GetFullPath(a.Trim());
                 string nb = System.IO.Path.GetFullPath(b.Trim());
                 if (System.Runtime.InteropServices.RuntimeInformation.IsOSPlatform(System.Runtime.InteropServices.OSPlatform.Windows))
                 {
                     return string.Equals(na, nb, StringComparison.OrdinalIgnoreCase);
                 }
+                // Default to ordinal on Unix; optionally detect FS case-sensitivity at runtime if needed
                 return string.Equals(na, nb, StringComparison.Ordinal);
             }
             catch { return false; }
@@ -758,22 +757,112 @@ namespace MCPForUnity.Editor.Windows

         private static bool VerifyBridgePing(int port)
         {
+            // Use strict framed protocol to match bridge (FRAMING=1)
+            const int ConnectTimeoutMs = 1000;
+            const int FrameTimeoutMs = 30000; // match bridge frame I/O timeout
+
             try
             {
-                using TcpClient c = new TcpClient();
-                var task = c.ConnectAsync(IPAddress.Loopback, port);
-                if (!task.Wait(500)) return false;
-                using NetworkStream s = c.GetStream();
-                byte[] ping = Encoding.UTF8.GetBytes("ping");
-                s.Write(ping, 0, ping.Length);
-                s.ReadTimeout = 1000;
-                byte[] buf = new byte[256];
-                int n = s.Read(buf, 0, buf.Length);
-                if (n <= 0) return false;
-                string resp = Encoding.UTF8.GetString(buf, 0, n);
-                return resp.Contains("pong", StringComparison.OrdinalIgnoreCase);
+                using TcpClient client = new TcpClient();
+                var connectTask = client.ConnectAsync(IPAddress.Loopback, port);
+                if (!connectTask.Wait(ConnectTimeoutMs)) return false;
+
+                using NetworkStream stream = client.GetStream();
+                try { client.NoDelay = true; } catch { }
+
+                // 1) Read handshake line (ASCII, newline-terminated)
+                string handshake = ReadLineAscii(stream, 2000);
+                if (string.IsNullOrEmpty(handshake) || handshake.IndexOf("FRAMING=1", StringComparison.OrdinalIgnoreCase) < 0)
+                {
+                    UnityEngine.Debug.LogWarning("MCP for Unity: Bridge handshake missing FRAMING=1");
+                    return false;
+                }
+
+                // 2) Send framed "ping"
+                byte[] payload = Encoding.UTF8.GetBytes("ping");
+                WriteFrame(stream, payload, FrameTimeoutMs);
+
+                // 3) Read framed response and check for pong
+                string response = ReadFrameUtf8(stream, FrameTimeoutMs);
+                bool ok = !string.IsNullOrEmpty(response) && response.IndexOf("pong", StringComparison.OrdinalIgnoreCase) >= 0;
+                if (!ok)
+                {
+                    UnityEngine.Debug.LogWarning($"MCP for Unity: Framed ping failed; response='{response}'");
+                }
+                return ok;
             }
-            catch { return false; }
+            catch (Exception ex)
+            {
+                UnityEngine.Debug.LogWarning($"MCP for Unity: VerifyBridgePing error: {ex.Message}");
+                return false;
+            }
         }
+
+        // Minimal framing helpers (8-byte big-endian length prefix), blocking with timeouts
+        private static void WriteFrame(NetworkStream stream, byte[] payload, int timeoutMs)
+        {
+            if (payload == null) throw new ArgumentNullException(nameof(payload));
+            if (payload.LongLength < 1) throw new IOException("Zero-length frames are not allowed");
+            byte[] header = new byte[8];
+            ulong len = (ulong)payload.LongLength;
+            header[0] = (byte)(len >> 56);
+            header[1] = (byte)(len >> 48);
+            header[2] = (byte)(len >> 40);
+            header[3] = (byte)(len >> 32);
+            header[4] = (byte)(len >> 24);
+            header[5] = (byte)(len >> 16);
+            header[6] = (byte)(len >> 8);
+            header[7] = (byte)(len);
+
+            stream.WriteTimeout = timeoutMs;
+            stream.Write(header, 0, header.Length);
+            stream.Write(payload, 0, payload.Length);
+        }
+
+        private static string ReadFrameUtf8(NetworkStream stream, int timeoutMs)
+        {
+            byte[] header = ReadExact(stream, 8, timeoutMs);
+            ulong len = ((ulong)header[0] << 56)
+                | ((ulong)header[1] << 48)
+                | ((ulong)header[2] << 40)
+                | ((ulong)header[3] << 32)
+                | ((ulong)header[4] << 24)
+                | ((ulong)header[5] << 16)
+                | ((ulong)header[6] << 8)
+                | header[7];
+            if (len == 0UL) throw new IOException("Zero-length frames are not allowed");
+            if (len > int.MaxValue) throw new IOException("Frame too large");
+            byte[] payload = ReadExact(stream, (int)len, timeoutMs);
+            return Encoding.UTF8.GetString(payload);
+        }
+
+        private static byte[] ReadExact(NetworkStream stream, int count, int timeoutMs)
+        {
+            byte[] buffer = new byte[count];
+            int offset = 0;
+            stream.ReadTimeout = timeoutMs;
+            while (offset < count)
+            {
+                int read = stream.Read(buffer, offset, count - offset);
+                if (read <= 0) throw new IOException("Connection closed before reading expected bytes");
+                offset += read;
+            }
+            return buffer;
+        }
+
+        private static string ReadLineAscii(NetworkStream stream, int timeoutMs, int maxLen = 512)
+        {
+            stream.ReadTimeout = timeoutMs;
+            using var ms = new MemoryStream();
+            byte[] one = new byte[1];
+            while (ms.Length < maxLen)
+            {
+                int n = stream.Read(one, 0, 1);
+                if (n <= 0) break;
+                if (one[0] == (byte)'\n') break;
+                ms.WriteByte(one[0]);
+            }
+            return Encoding.ASCII.GetString(ms.ToArray());
+        }

         private void DrawClientConfigurationCompact(McpClient mcpClient)
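The handshake-plus-framing exchange above (newline-terminated `FRAMING=1` banner, then 8-byte big-endian length-prefixed frames) is exactly what the CI readiness probe needs to speak. A minimal Python client sketch of the same exchange, useful for probing the bridge outside Unity — the `framed_ping` helper name is illustrative, not part of the codebase:

```python
import socket
import struct


def _read_exact(sock: socket.socket, count: int) -> bytes:
    """Read exactly count bytes or raise, mirroring the C# ReadExact helper."""
    buf = b""
    while len(buf) < count:
        chunk = sock.recv(count - len(buf))
        if not chunk:
            raise IOError("connection closed before reading expected bytes")
        buf += chunk
    return buf


def framed_ping(host: str, port: int, timeout: float = 2.0) -> bool:
    """Readiness probe mirroring VerifyBridgePing: handshake, framed ping, framed pong."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        # 1) Handshake line must advertise FRAMING=1
        handshake = b""
        while not handshake.endswith(b"\n") and len(handshake) < 512:
            ch = sock.recv(1)
            if not ch:
                return False
            handshake += ch
        if b"FRAMING=1" not in handshake.upper():
            return False
        # 2) Send framed "ping": 8-byte big-endian length prefix, then payload
        payload = b"ping"
        sock.sendall(struct.pack(">Q", len(payload)) + payload)
        # 3) Read framed response and look for "pong"
        (length,) = struct.unpack(">Q", _read_exact(sock, 8))
        if length == 0 or length > 2**31 - 1:
            return False
        return b"pong" in _read_exact(sock, int(length)).lower()
```

`struct.pack(">Q", n)` produces the same 8-byte big-endian header the C# bit-shift loop builds by hand.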
@@ -1134,10 +1223,19 @@ namespace MCPForUnity.Editor.Windows
             }
             catch { }

-            // 1) Start from existing, only fill gaps
-            string uvPath = (ValidateUvBinarySafe(existingCommand) ? existingCommand : FindUvPath());
+            // 1) Start from existing, only fill gaps (prefer trusted resolver)
+            string uvPath = ServerInstaller.FindUvPath();
+            // Optionally trust existingCommand if it looks like uv/uv.exe
+            try
+            {
+                var name = System.IO.Path.GetFileName((existingCommand ?? string.Empty).Trim()).ToLowerInvariant();
+                if ((name == "uv" || name == "uv.exe") && ValidateUvBinarySafe(existingCommand))
+                {
+                    uvPath = existingCommand;
+                }
+            }
+            catch { }
             if (uvPath == null) return "UV package manager not found. Please install UV first.";

             string serverSrc = ExtractDirectoryArg(existingArgs);
             bool serverValid = !string.IsNullOrEmpty(serverSrc)
                 && System.IO.File.Exists(System.IO.Path.Combine(serverSrc, "server.py"));
@@ -1203,51 +1301,61 @@ namespace MCPForUnity.Editor.Windows

             string mergedJson = JsonConvert.SerializeObject(existingRoot, jsonSettings);

-            // Use a more robust atomic write pattern
+            // Robust atomic write without redundant backup or race on existence
             string tmp = configPath + ".tmp";
             string backup = configPath + ".backup";
+            bool writeDone = false;
             try
             {
-                // Write to temp file first
+                // Write to temp file first (in same directory for atomicity)
                 System.IO.File.WriteAllText(tmp, mergedJson, new System.Text.UTF8Encoding(false));

-                // Create backup of existing file if it exists
-                if (System.IO.File.Exists(configPath))
+                try
                 {
-                    System.IO.File.Copy(configPath, backup, true);
+                    // Try atomic replace; creates 'backup' only on success (platform-dependent)
+                    System.IO.File.Replace(tmp, configPath, backup);
+                    writeDone = true;
                 }
-
-                // Atomic move operation (more reliable than Replace on macOS)
-                if (System.IO.File.Exists(configPath))
+                catch (System.IO.FileNotFoundException)
                 {
-                    System.IO.File.Delete(configPath);
+                    // Destination didn't exist; fall back to move
+                    System.IO.File.Move(tmp, configPath);
+                    writeDone = true;
                 }
-                System.IO.File.Move(tmp, configPath);
-
-                // Clean up backup
-                if (System.IO.File.Exists(backup))
+                catch (System.PlatformNotSupportedException)
                 {
-                    System.IO.File.Delete(backup);
+                    // Fallback: rename existing to backup, then move tmp into place
+                    if (System.IO.File.Exists(configPath))
+                    {
+                        try { if (System.IO.File.Exists(backup)) System.IO.File.Delete(backup); } catch { }
+                        System.IO.File.Move(configPath, backup);
+                    }
+                    System.IO.File.Move(tmp, configPath);
+                    writeDone = true;
                 }
             }
             catch (Exception ex)
             {
-                // Clean up temp file
-                try { if (System.IO.File.Exists(tmp)) System.IO.File.Delete(tmp); } catch { }
-                // Restore backup if it exists
-                try {
-                    if (System.IO.File.Exists(backup))
+                // If write did not complete, attempt restore from backup without deleting current file first
+                try
+                {
+                    if (!writeDone && System.IO.File.Exists(backup))
                     {
-                        if (System.IO.File.Exists(configPath))
-                        {
-                            System.IO.File.Delete(configPath);
-                        }
-                        System.IO.File.Move(backup, configPath);
+                        try { System.IO.File.Copy(backup, configPath, true); } catch { }
                     }
-                } catch { }
+                }
+                catch { }
                 throw new Exception($"Failed to write config file '{configPath}': {ex.Message}", ex);
             }
+            finally
+            {
+                // Best-effort cleanup of temp
+                try { if (System.IO.File.Exists(tmp)) System.IO.File.Delete(tmp); } catch { }
+                // Only remove backup after a confirmed successful write
+                try { if (writeDone && System.IO.File.Exists(backup)) System.IO.File.Delete(backup); } catch { }
+            }

             try
             {
                 if (IsValidUv(uvPath)) UnityEditor.EditorPrefs.SetString("MCPForUnity.UvPath", uvPath);
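The `File.Replace`/`File.Move` dance above is needed because .NET has no single cross-platform atomic overwrite. On the Python side the same write-temp-then-swap pattern collapses to `os.replace`, which is atomic on both POSIX and Windows when source and target share a filesystem. A minimal sketch (the `atomic_write_config` name is hypothetical):

```python
import os
import tempfile


def atomic_write_config(config_path: str, merged_json: str) -> None:
    """Write to a temp file in the destination directory, then atomically
    swap it into place so readers never observe a partial config."""
    directory = os.path.dirname(config_path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(merged_json)
        os.replace(tmp, config_path)  # atomic; overwrites any existing file
    except BaseException:
        # Best-effort cleanup of the temp file on failure
        try:
            os.unlink(tmp)
        except OSError:
            pass
        raise
```

Creating the temp file in the same directory as the target (not the system temp dir) is what keeps the final rename on one filesystem and therefore atomic.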
@@ -1835,283 +1943,12 @@ namespace MCPForUnity.Editor.Windows

         private string FindUvPath()
         {
-            string uvPath = null;
-
-            if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
-            {
-                uvPath = FindWindowsUvPath();
-            }
-            else
-            {
-                // macOS/Linux paths
-                string[] possiblePaths = {
-                    "/Library/Frameworks/Python.framework/Versions/3.13/bin/uv",
-                    "/usr/local/bin/uv",
-                    "/opt/homebrew/bin/uv",
-                    "/usr/bin/uv"
-                };
-
-                foreach (string path in possiblePaths)
-                {
-                    if (File.Exists(path) && IsValidUvInstallation(path))
-                    {
-                        uvPath = path;
-                        break;
-                    }
-                }
-
-                // If not found in common locations, try to find via which command
-                if (uvPath == null)
-                {
-                    try
-                    {
-                        var psi = new ProcessStartInfo
-                        {
-                            FileName = "which",
-                            Arguments = "uv",
-                            UseShellExecute = false,
-                            RedirectStandardOutput = true,
-                            CreateNoWindow = true
-                        };
-
-                        using var process = Process.Start(psi);
-                        string output = process.StandardOutput.ReadToEnd().Trim();
-                        process.WaitForExit();
-
-                        if (!string.IsNullOrEmpty(output) && File.Exists(output) && IsValidUvInstallation(output))
-                        {
-                            uvPath = output;
-                        }
-                    }
-                    catch
-                    {
-                        // Ignore errors
-                    }
-                }
-            }
-
-            // If no specific path found, fall back to using 'uv' from PATH
-            if (uvPath == null)
-            {
-                // Test if 'uv' is available in PATH by trying to run it
-                string uvCommand = RuntimeInformation.IsOSPlatform(OSPlatform.Windows) ? "uv.exe" : "uv";
-                if (IsValidUvInstallation(uvCommand))
-                {
-                    uvPath = uvCommand;
-                }
-            }
-
-            if (uvPath == null)
-            {
-                UnityEngine.Debug.LogError("UV package manager not found! Please install UV first:\n" +
-                    "• macOS/Linux: curl -LsSf https://astral.sh/uv/install.sh | sh\n" +
-                    "• Windows: pip install uv\n" +
-                    "• Or visit: https://docs.astral.sh/uv/getting-started/installation");
-                return null;
-            }
-
-            return uvPath;
+            try { return MCPForUnity.Editor.Helpers.ServerInstaller.FindUvPath(); } catch { return null; }
         }

-        private bool IsValidUvInstallation(string uvPath)
-        {
-            try
-            {
-                var psi = new ProcessStartInfo
-                {
-                    FileName = uvPath,
-                    Arguments = "--version",
-                    UseShellExecute = false,
-                    RedirectStandardOutput = true,
-                    RedirectStandardError = true,
-                    CreateNoWindow = true
-                };
-
-                using var process = Process.Start(psi);
-                process.WaitForExit(5000); // 5 second timeout
-
-                if (process.ExitCode == 0)
-                {
-                    string output = process.StandardOutput.ReadToEnd().Trim();
-                    // Basic validation - just check if it responds with version info
-                    // UV typically outputs "uv 0.x.x" format
-                    if (output.StartsWith("uv ") && output.Contains("."))
-                    {
-                        return true;
-                    }
-                }
-
-                return false;
-            }
-            catch
-            {
-                return false;
-            }
-        }
+        // Validation and platform-specific scanning are handled by ServerInstaller.FindUvPath()

-        private string FindWindowsUvPath()
-        {
-            string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
-            string localAppData = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
-            string userProfile = Environment.GetFolderPath(Environment.SpecialFolder.UserProfile);
-
-            // Dynamic Python version detection - check what's actually installed
-            List<string> pythonVersions = new List<string>();
-
-            // Add common versions but also scan for any Python* directories
-            string[] commonVersions = { "Python313", "Python312", "Python311", "Python310", "Python39", "Python38", "Python37" };
-            pythonVersions.AddRange(commonVersions);
-
-            // Scan for additional Python installations
-            string[] pythonBasePaths = {
-                Path.Combine(appData, "Python"),
-                Path.Combine(localAppData, "Programs", "Python"),
-                Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles) + "\\Python",
-                Environment.GetFolderPath(Environment.SpecialFolder.ProgramFilesX86) + "\\Python"
-            };
-
-            foreach (string basePath in pythonBasePaths)
-            {
-                if (Directory.Exists(basePath))
-                {
-                    try
-                    {
-                        foreach (string dir in Directory.GetDirectories(basePath, "Python*"))
-                        {
-                            string versionName = Path.GetFileName(dir);
-                            if (!pythonVersions.Contains(versionName))
-                            {
-                                pythonVersions.Add(versionName);
-                            }
-                        }
-                    }
-                    catch
-                    {
-                        // Ignore directory access errors
-                    }
-                }
-            }
-
-            // Check Python installations for UV
-            foreach (string version in pythonVersions)
-            {
-                string[] pythonPaths = {
-                    Path.Combine(appData, "Python", version, "Scripts", "uv.exe"),
-                    Path.Combine(localAppData, "Programs", "Python", version, "Scripts", "uv.exe"),
-                    Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles), "Python", version, "Scripts", "uv.exe"),
-                    Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ProgramFilesX86), "Python", version, "Scripts", "uv.exe")
-                };
-
-                foreach (string uvPath in pythonPaths)
-                {
-                    if (File.Exists(uvPath) && IsValidUvInstallation(uvPath))
-                    {
-                        return uvPath;
-                    }
-                }
-            }
-
-            // Check package manager installations
-            string[] packageManagerPaths = {
-                // Chocolatey
-                Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData), "chocolatey", "lib", "uv", "tools", "uv.exe"),
-                Path.Combine("C:", "ProgramData", "chocolatey", "lib", "uv", "tools", "uv.exe"),
-
-                // Scoop
-                Path.Combine(userProfile, "scoop", "apps", "uv", "current", "uv.exe"),
-                Path.Combine(userProfile, "scoop", "shims", "uv.exe"),
-
-                // Winget/msstore
-                Path.Combine(localAppData, "Microsoft", "WinGet", "Packages", "astral-sh.uv_Microsoft.Winget.Source_8wekyb3d8bbwe", "uv.exe"),
-
-                // Common standalone installations
-                Path.Combine(localAppData, "uv", "uv.exe"),
-                Path.Combine(appData, "uv", "uv.exe"),
-                Path.Combine(userProfile, ".local", "bin", "uv.exe"),
-                Path.Combine(userProfile, "bin", "uv.exe"),
-
-                // Cargo/Rust installations
-                Path.Combine(userProfile, ".cargo", "bin", "uv.exe"),
-
-                // Manual installations in common locations
-                Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles), "uv", "uv.exe"),
-                Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ProgramFilesX86), "uv", "uv.exe")
-            };
-
-            foreach (string uvPath in packageManagerPaths)
-            {
-                if (File.Exists(uvPath) && IsValidUvInstallation(uvPath))
-                {
-                    return uvPath;
-                }
-            }
-
-            // Try to find uv via where command (Windows equivalent of which)
-            // Use where.exe explicitly to avoid PowerShell alias conflicts
-            try
-            {
-                var psi = new ProcessStartInfo
-                {
-                    FileName = "where.exe",
-                    Arguments = "uv",
-                    UseShellExecute = false,
-                    RedirectStandardOutput = true,
-                    RedirectStandardError = true,
-                    CreateNoWindow = true
-                };
-
-                using var process = Process.Start(psi);
-                string output = process.StandardOutput.ReadToEnd().Trim();
-                process.WaitForExit();
-
-                if (process.ExitCode == 0 && !string.IsNullOrEmpty(output))
-                {
-                    string[] lines = output.Split('\n');
-                    foreach (string line in lines)
-                    {
-                        string cleanPath = line.Trim();
-                        if (File.Exists(cleanPath) && IsValidUvInstallation(cleanPath))
-                        {
-                            return cleanPath;
-                        }
-                    }
-                }
-            }
-            catch
-            {
-                // If where.exe fails, try PowerShell's Get-Command as fallback
-                try
-                {
-                    var psi = new ProcessStartInfo
-                    {
-                        FileName = "powershell.exe",
-                        Arguments = "-Command \"(Get-Command uv -ErrorAction SilentlyContinue).Source\"",
-                        UseShellExecute = false,
-                        RedirectStandardOutput = true,
-                        RedirectStandardError = true,
-                        CreateNoWindow = true
-                    };
-
-                    using var process = Process.Start(psi);
-                    string output = process.StandardOutput.ReadToEnd().Trim();
-                    process.WaitForExit();
-
-                    if (process.ExitCode == 0 && !string.IsNullOrEmpty(output) && File.Exists(output))
-                    {
-                        if (IsValidUvInstallation(output))
-                        {
-                            return output;
-                        }
-                    }
-                }
-                catch
-                {
-                    // Ignore PowerShell errors too
-                }
-            }
-
-            return null; // Will fallback to using 'uv' from PATH
-        }
+        // Windows-specific discovery removed; use ServerInstaller.FindUvPath() instead

+        // Removed unused FindClaudeCommand
@@ -2123,10 +1960,14 @@ namespace MCPForUnity.Editor.Windows
             string unityProjectDir = Application.dataPath;
             string projectDir = Path.GetDirectoryName(unityProjectDir);

-            // Read the global Claude config file
-            string configPath = RuntimeInformation.IsOSPlatform(OSPlatform.Windows)
-                ? mcpClient.windowsConfigPath
-                : mcpClient.linuxConfigPath;
+            // Read the global Claude config file (honor macConfigPath on macOS)
+            string configPath;
+            if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
+                configPath = mcpClient.windowsConfigPath;
+            else if (RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
+                configPath = string.IsNullOrEmpty(mcpClient.macConfigPath) ? mcpClient.linuxConfigPath : mcpClient.macConfigPath;
+            else
+                configPath = mcpClient.linuxConfigPath;

             if (debugLogsEnabled)
             {
@@ -119,7 +119,9 @@ namespace MCPForUnity.Editor.Windows
             else if (RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
             {
                 displayPath = string.IsNullOrEmpty(mcpClient.macConfigPath)
-                    ? mcpClient.linuxConfigPath
+                    ? configPath
                     : mcpClient.macConfigPath;
             }
             else if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
@@ -17,6 +17,9 @@ class ServerConfig:
     # Connection settings
     connection_timeout: float = 60.0  # default steady-state timeout; retries use shorter timeouts
     buffer_size: int = 16 * 1024 * 1024  # 16MB buffer
+    # Framed receive behavior
+    framed_receive_timeout: float = 2.0  # max seconds to wait while consuming heartbeats only
+    max_heartbeat_frames: int = 16  # cap heartbeat frames consumed before giving up

     # Logging settings
     log_level: str = "INFO"
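The two new knobs bound how long the server will sit consuming heartbeat frames before treating the connection as stalled: a wall-clock cap (`framed_receive_timeout`) and a frame-count cap (`max_heartbeat_frames`). A rough sketch of the loop they govern — the heartbeat payload here is an assumed sentinel and `recv_payload_frame` is illustrative, not the server's actual receive path:

```python
import io
import struct
import time

FRAMED_RECEIVE_TIMEOUT = 2.0   # mirrors config.framed_receive_timeout
MAX_HEARTBEAT_FRAMES = 16      # mirrors config.max_heartbeat_frames


def recv_payload_frame(stream, heartbeat: bytes = b"\x01") -> bytes:
    """Skip up to MAX_HEARTBEAT_FRAMES heartbeat frames within
    FRAMED_RECEIVE_TIMEOUT and return the first real payload frame.
    Frames use the bridge's 8-byte big-endian length prefix."""
    deadline = time.monotonic() + FRAMED_RECEIVE_TIMEOUT
    for _ in range(MAX_HEARTBEAT_FRAMES + 1):
        if time.monotonic() > deadline:
            raise TimeoutError("only heartbeats received before timeout")
        header = stream.read(8)
        if len(header) < 8:
            raise IOError("connection closed mid-frame")
        (length,) = struct.unpack(">Q", header)
        payload = stream.read(length)
        if payload != heartbeat:
            return payload
    raise IOError("exceeded max heartbeat frames")
```

Bounding by both time and count means a chatty-but-stalled bridge fails fast instead of pinning the receive loop.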
@@ -0,0 +1,11 @@
+{
+    "typeCheckingMode": "basic",
+    "reportMissingImports": "none",
+    "pythonVersion": "3.11",
+    "executionEnvironments": [
+        {
+            "root": ".",
+            "pythonVersion": "3.11"
+        }
+    ]
+}
@@ -1 +1 @@
-3.0.1
+3.1.1
@@ -1,3 +1,5 @@
+import logging
+
 from .manage_script_edits import register_manage_script_edits_tools
 from .manage_script import register_manage_script_tools
 from .manage_scene import register_manage_scene_tools
 from .manage_editor import register_manage_editor_tools

@@ -6,10 +8,15 @@ from .manage_asset import register_manage_asset_tools
 from .manage_shader import register_manage_shader_tools
 from .read_console import register_read_console_tools
 from .execute_menu_item import register_execute_menu_item_tools
+from .resource_tools import register_resource_tools
+
+logger = logging.getLogger("mcp-for-unity-server")
+

 def register_all_tools(mcp):
     """Register all refactored tools with the MCP server."""
-    print("Registering MCP for Unity Server refactored tools...")
+    logger.info("Registering MCP for Unity Server refactored tools...")
+    # Prefer the surgical edits tool so LLMs discover it first
     register_manage_script_edits_tools(mcp)
     register_manage_script_tools(mcp)
     register_manage_scene_tools(mcp)
     register_manage_editor_tools(mcp)

@@ -18,4 +25,6 @@ def register_all_tools(mcp):
     register_manage_shader_tools(mcp)
     register_read_console_tools(mcp)
     register_execute_menu_item_tools(mcp)
-    print("MCP for Unity Server tool registration complete.")
+    # Expose resource wrappers as normal tools so IDEs without resources primitive can use them
+    register_resource_tools(mcp)
+    logger.info("MCP for Unity Server tool registration complete.")
@@ -1,29 +1,414 @@
 from mcp.server.fastmcp import FastMCP, Context
-from typing import Dict, Any
-from unity_connection import get_unity_connection, send_command_with_retry
-from config import config
-import time
-import os
+from typing import Dict, Any, List
+from unity_connection import send_command_with_retry
+import base64
+import os
+from urllib.parse import urlparse, unquote


 def register_manage_script_tools(mcp: FastMCP):
     """Register all script management tools with the MCP server."""

-    @mcp.tool()
+    def _split_uri(uri: str) -> tuple[str, str]:
+        """Split an incoming URI or path into (name, directory) suitable for Unity.
+
+        Rules:
+        - unity://path/Assets/... → keep as Assets-relative (after decode/normalize)
+        - file://... → percent-decode, normalize, strip host and leading slashes,
+          then, if any 'Assets' segment exists, return path relative to that 'Assets' root.
+          Otherwise, fall back to original name/dir behavior.
+        - plain paths → decode/normalize separators; if they contain an 'Assets' segment,
+          return relative to 'Assets'.
+        """
+        raw_path: str
+        if uri.startswith("unity://path/"):
+            raw_path = uri[len("unity://path/") :]
+        elif uri.startswith("file://"):
+            parsed = urlparse(uri)
+            host = (parsed.netloc or "").strip()
+            p = parsed.path or ""
+            # UNC: file://server/share/... -> //server/share/...
+            if host and host.lower() != "localhost":
+                p = f"//{host}{p}"
+            # Use percent-decoded path, preserving leading slashes
+            raw_path = unquote(p)
+        else:
+            raw_path = uri
+
+        # Percent-decode any residual encodings and normalize separators
+        raw_path = unquote(raw_path).replace("\\", "/")
+        # Strip leading slash only for Windows drive-letter forms like "/C:/..."
+        if os.name == "nt" and len(raw_path) >= 3 and raw_path[0] == "/" and raw_path[2] == ":":
+            raw_path = raw_path[1:]
+
+        # Normalize path (collapse ../, ./)
+        norm = os.path.normpath(raw_path).replace("\\", "/")
+
+        # If an 'Assets' segment exists, compute path relative to it (case-insensitive)
+        parts = [p for p in norm.split("/") if p not in ("", ".")]
+        idx = next((i for i, seg in enumerate(parts) if seg.lower() == "assets"), None)
+        assets_rel = "/".join(parts[idx:]) if idx is not None else None
+
+        effective_path = assets_rel if assets_rel else norm
+        # For POSIX absolute paths outside Assets, drop the leading '/'
+        # to return a clean relative-like directory (e.g., '/tmp' -> 'tmp').
+        if effective_path.startswith("/"):
+            effective_path = effective_path[1:]
+
+        name = os.path.splitext(os.path.basename(effective_path))[0]
+        directory = os.path.dirname(effective_path)
+        return name, directory
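The routing rules in the docstring can be sanity-checked with a trimmed-down standalone version of the splitter — this sketch omits the UNC-host and Windows drive-letter handling the real helper performs:

```python
import os
from urllib.parse import urlparse, unquote


def split_uri(uri: str) -> tuple[str, str]:
    """Trimmed-down sketch: strip the scheme, percent-decode,
    then rebase onto the first 'Assets' segment when one exists."""
    if uri.startswith("unity://path/"):
        raw = uri[len("unity://path/"):]
    elif uri.startswith("file://"):
        raw = unquote(urlparse(uri).path)
    else:
        raw = uri
    norm = os.path.normpath(unquote(raw)).replace("\\", "/")
    parts = [p for p in norm.split("/") if p not in ("", ".")]
    idx = next((i for i, seg in enumerate(parts) if seg.lower() == "assets"), None)
    effective = "/".join(parts[idx:]) if idx is not None else norm.lstrip("/")
    name = os.path.splitext(os.path.basename(effective))[0]
    return name, os.path.dirname(effective)
```

For example, `split_uri("file:///home/me/Proj/Assets/AI/Enemy.cs")` rebases the absolute path onto `Assets/AI`, which is what lets the server accept absolute editor paths while Unity only ever sees Assets-relative ones.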
+    @mcp.tool(description=(
+        "Apply small text edits to a C# script identified by URI.\n\n"
+        "⚠️ IMPORTANT: This tool replaces EXACT character positions. Always verify content at target lines/columns BEFORE editing!\n"
+        "Common mistakes:\n"
+        "- Assuming what's on a line without checking\n"
+        "- Using wrong line numbers (they're 1-indexed)\n"
+        "- Miscounting column positions (also 1-indexed, tabs count as 1)\n\n"
+        "RECOMMENDED WORKFLOW:\n"
+        "1) First call resources/read with start_line/line_count to verify exact content\n"
+        "2) Count columns carefully (or use find_in_file to locate patterns)\n"
+        "3) Apply your edit with precise coordinates\n"
+        "4) Consider script_apply_edits with anchors for safer pattern-based replacements\n\n"
+        "Args:\n"
+        "- uri: unity://path/Assets/... or file://... or Assets/...\n"
+        "- edits: list of {startLine,startCol,endLine,endCol,newText} (1-indexed!)\n"
+        "- precondition_sha256: optional SHA of current file (prevents concurrent edit conflicts)\n\n"
+        "Notes:\n"
+        "- Path must resolve under Assets/\n"
+        "- For method/class operations, use script_apply_edits (safer, structured edits)\n"
+        "- For pattern-based replacements, consider anchor operations in script_apply_edits\n"
+    ))
+    def apply_text_edits(
+        ctx: Context,
+        uri: str,
+        edits: List[Dict[str, Any]],
+        precondition_sha256: str | None = None,
+        strict: bool | None = None,
+        options: Dict[str, Any] | None = None,
+    ) -> Dict[str, Any]:
+        """Apply small text edits to a C# script identified by URI."""
+        name, directory = _split_uri(uri)
+
+        # Normalize common aliases/misuses for resilience:
+        # - Accept LSP-style range objects: {range:{start:{line,character}, end:{...}}, newText|text}
+        # - Accept index ranges as a 2-int array: {range:[startIndex,endIndex], text}
+        # If normalization is required, read current contents to map indices -> 1-based line/col.
+        def _needs_normalization(arr: List[Dict[str, Any]]) -> bool:
+            for e in arr or []:
+                if ("startLine" not in e) or ("startCol" not in e) or ("endLine" not in e) or ("endCol" not in e) or ("newText" not in e and "text" in e):
+                    return True
+            return False
+
+        normalized_edits: List[Dict[str, Any]] = []
+        warnings: List[str] = []
+        if _needs_normalization(edits):
+            # Read file to support index->line/col conversion when needed
+            read_resp = send_command_with_retry("manage_script", {
+                "action": "read",
+                "name": name,
+                "path": directory,
+            })
+            if not (isinstance(read_resp, dict) and read_resp.get("success")):
+                return read_resp if isinstance(read_resp, dict) else {"success": False, "message": str(read_resp)}
+            data = read_resp.get("data", {})
+            contents = data.get("contents")
+            if not contents and data.get("contentsEncoded"):
+                try:
+                    contents = base64.b64decode(data.get("encodedContents", "").encode("utf-8")).decode("utf-8", "replace")
+                except Exception:
+                    contents = contents or ""
+
+            # Helper to map 0-based character index to 1-based line/col
+            def line_col_from_index(idx: int) -> tuple[int, int]:
+                if idx <= 0:
+                    return 1, 1
+                # Count lines up to idx and position within line
+                nl_count = contents.count("\n", 0, idx)
+                line = nl_count + 1
+                last_nl = contents.rfind("\n", 0, idx)
+                col = (idx - (last_nl + 1)) + 1 if last_nl >= 0 else idx + 1
+                return line, col
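As a quick sanity check of the index→line/col convention used by `line_col_from_index` (this is a standalone restatement taking `contents` as a parameter; the in-tool version closes over it):

```python
def line_col_from_index(contents: str, idx: int) -> tuple[int, int]:
    """Map a 0-based character index into 1-based (line, col)
    by counting newlines before idx."""
    if idx <= 0:
        return 1, 1
    line = contents.count("\n", 0, idx) + 1
    last_nl = contents.rfind("\n", 0, idx)
    # Column is the offset past the last newline, converted to 1-based
    col = idx - last_nl if last_nl >= 0 else idx + 1
    return line, col
```

So in `"abc\ndef\n"`, index 4 (the `d`) maps to line 2, column 1 — the same 1-based coordinates the Unity-side editor expects.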
|
||||
for e in edits or []:
|
||||
e2 = dict(e)
|
||||
# Map text->newText if needed
|
||||
if "newText" not in e2 and "text" in e2:
|
||||
e2["newText"] = e2.pop("text")
|
||||
|
||||
if "startLine" in e2 and "startCol" in e2 and "endLine" in e2 and "endCol" in e2:
|
||||
# Guard: explicit fields must be 1-based.
|
||||
zero_based = False
|
||||
for k in ("startLine","startCol","endLine","endCol"):
|
||||
try:
|
||||
if int(e2.get(k, 1)) < 1:
|
||||
zero_based = True
|
||||
except Exception:
|
||||
pass
|
||||
if zero_based:
|
||||
if strict:
|
||||
return {"success": False, "code": "zero_based_explicit_fields", "message": "Explicit line/col fields are 1-based; received zero-based.", "data": {"normalizedEdits": normalized_edits}}
|
||||
# Normalize by clamping to 1 and warn
|
||||
for k in ("startLine","startCol","endLine","endCol"):
|
||||
try:
|
||||
if int(e2.get(k, 1)) < 1:
|
||||
e2[k] = 1
|
||||
except Exception:
|
||||
pass
|
||||
warnings.append("zero_based_explicit_fields_normalized")
|
||||
normalized_edits.append(e2)
|
||||
continue
|
||||
|
||||
rng = e2.get("range")
|
||||
if isinstance(rng, dict):
|
||||
# LSP style: 0-based
|
||||
s = rng.get("start", {})
|
||||
t = rng.get("end", {})
|
||||
e2["startLine"] = int(s.get("line", 0)) + 1
|
||||
e2["startCol"] = int(s.get("character", 0)) + 1
|
||||
e2["endLine"] = int(t.get("line", 0)) + 1
|
||||
e2["endCol"] = int(t.get("character", 0)) + 1
|
||||
e2.pop("range", None)
|
||||
normalized_edits.append(e2)
|
||||
continue
|
||||
if isinstance(rng, (list, tuple)) and len(rng) == 2:
|
||||
try:
|
||||
a = int(rng[0])
|
||||
b = int(rng[1])
|
||||
if b < a:
|
||||
a, b = b, a
|
||||
sl, sc = line_col_from_index(a)
|
||||
el, ec = line_col_from_index(b)
|
||||
e2["startLine"] = sl
|
||||
e2["startCol"] = sc
|
||||
e2["endLine"] = el
|
||||
e2["endCol"] = ec
|
||||
e2.pop("range", None)
|
||||
normalized_edits.append(e2)
|
||||
continue
|
||||
except Exception:
|
||||
pass
|
||||
# Could not normalize this edit
|
||||
return {
|
||||
"success": False,
|
||||
"code": "missing_field",
|
||||
"message": "apply_text_edits requires startLine/startCol/endLine/endCol/newText or a normalizable 'range'",
|
||||
"data": {"expected": ["startLine","startCol","endLine","endCol","newText"], "got": e}
|
||||
}
|
||||
        else:
            # Even when edits appear already in explicit form, validate 1-based coordinates.
            normalized_edits = []
            for e in edits or []:
                e2 = dict(e)
                has_all = all(k in e2 for k in ("startLine", "startCol", "endLine", "endCol"))
                if has_all:
                    zero_based = False
                    for k in ("startLine", "startCol", "endLine", "endCol"):
                        try:
                            if int(e2.get(k, 1)) < 1:
                                zero_based = True
                        except Exception:
                            pass
                    if zero_based:
                        if strict:
                            return {"success": False, "code": "zero_based_explicit_fields", "message": "Explicit line/col fields are 1-based; received zero-based.", "data": {"normalizedEdits": [e2]}}
                        for k in ("startLine", "startCol", "endLine", "endCol"):
                            try:
                                if int(e2.get(k, 1)) < 1:
                                    e2[k] = 1
                            except Exception:
                                pass
                        if "zero_based_explicit_fields_normalized" not in warnings:
                            warnings.append("zero_based_explicit_fields_normalized")
                normalized_edits.append(e2)

        # Preflight: detect overlapping ranges among normalized line/col spans
        def _pos_tuple(e: Dict[str, Any], key_start: bool) -> tuple[int, int]:
            return (
                int(e.get("startLine", 1)) if key_start else int(e.get("endLine", 1)),
                int(e.get("startCol", 1)) if key_start else int(e.get("endCol", 1)),
            )

        def _le(a: tuple[int, int], b: tuple[int, int]) -> bool:
            return a[0] < b[0] or (a[0] == b[0] and a[1] <= b[1])

        # Consider only true replace ranges (non-zero length); pure insertions (zero-width) don't overlap.
        spans = []
        for e in normalized_edits or []:
            try:
                s = _pos_tuple(e, True)
                t = _pos_tuple(e, False)
                if s != t:
                    spans.append((s, t))
            except Exception:
                # If coordinates are missing or invalid, let the server validate later.
                pass

        if spans:
            spans_sorted = sorted(spans, key=lambda p: (p[0][0], p[0][1]))
            for i in range(1, len(spans_sorted)):
                prev_end = spans_sorted[i - 1][1]
                curr_start = spans_sorted[i][0]
                # Overlap if prev_end > curr_start (strict), i.e. not prev_end <= curr_start
                if not _le(prev_end, curr_start):
                    conflicts = [{
                        "startA": {"line": spans_sorted[i - 1][0][0], "col": spans_sorted[i - 1][0][1]},
                        "endA": {"line": spans_sorted[i - 1][1][0], "col": spans_sorted[i - 1][1][1]},
                        "startB": {"line": spans_sorted[i][0][0], "col": spans_sorted[i][0][1]},
                        "endB": {"line": spans_sorted[i][1][0], "col": spans_sorted[i][1][1]},
                    }]
                    return {"success": False, "code": "overlap", "data": {"status": "overlap", "conflicts": conflicts}}

        # Note: do not auto-compute the precondition if missing; callers should supply it
        # via mcp__unity__get_sha or a prior read. This avoids hidden extra calls and
        # preserves existing call-count expectations in clients/tests.

        # Default options: for multi-span batches, prefer atomic to avoid mid-apply imbalance
        opts: Dict[str, Any] = dict(options or {})
        try:
            if len(normalized_edits) > 1 and "applyMode" not in opts:
                opts["applyMode"] = "atomic"
        except Exception:
            pass
        # Support an optional debug preview that echoes normalized spans without writing
        if opts.get("debug_preview"):
            try:
                # We cannot guarantee file contents here without a read; return normalized spans only.
                return {
                    "success": True,
                    "message": "Preview only (no write)",
                    "data": {
                        "normalizedEdits": normalized_edits,
                        "preview": True,
                    },
                }
            except Exception as e:
                return {"success": False, "code": "preview_failed", "message": f"debug_preview failed: {e}", "data": {"normalizedEdits": normalized_edits}}

        params = {
            "action": "apply_text_edits",
            "name": name,
            "path": directory,
            "edits": normalized_edits,
            "precondition_sha256": precondition_sha256,
            "options": opts,
        }
        params = {k: v for k, v in params.items() if v is not None}
        resp = send_command_with_retry("manage_script", params)
        if isinstance(resp, dict):
            data = resp.setdefault("data", {})
            data.setdefault("normalizedEdits", normalized_edits)
            if warnings:
                data.setdefault("warnings", warnings)
            return resp
        return {"success": False, "message": str(resp)}

    @mcp.tool(description=(
        "Create a new C# script at the given project path.\n\n"
        "Args: path (e.g., 'Assets/Scripts/My.cs'), contents (string), script_type, namespace.\n"
        "Rules: path must be under Assets/. Contents will be Base64-encoded over transport.\n"
    ))
    def create_script(
        ctx: Context,
        path: str,
        contents: str = "",
        script_type: str | None = None,
        namespace: str | None = None,
    ) -> Dict[str, Any]:
        """Create a new C# script at the given path."""
        name = os.path.splitext(os.path.basename(path))[0]
        directory = os.path.dirname(path)
        # Local validation to avoid round-trips on obviously bad input
        norm_path = os.path.normpath((path or "").replace("\\", "/")).replace("\\", "/")
        if not directory or directory.split("/")[0].lower() != "assets":
            return {"success": False, "code": "path_outside_assets", "message": f"path must be under 'Assets/'; got '{path}'."}
        if ".." in norm_path.split("/") or norm_path.startswith("/"):
            return {"success": False, "code": "bad_path", "message": "path must not contain traversal or be absolute."}
        if not name:
            return {"success": False, "code": "bad_path", "message": "path must include a script file name."}
        if not norm_path.lower().endswith(".cs"):
            return {"success": False, "code": "bad_extension", "message": "script file must end with .cs."}
        params: Dict[str, Any] = {
            "action": "create",
            "name": name,
            "path": directory,
            "namespace": namespace,
            "scriptType": script_type,
        }
        if contents:
            params["encodedContents"] = base64.b64encode(contents.encode("utf-8")).decode("utf-8")
            params["contentsEncoded"] = True
        params = {k: v for k, v in params.items() if v is not None}
        resp = send_command_with_retry("manage_script", params)
        return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)}
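As the rule in the tool description says, contents travel Base64-encoded to avoid JSON escaping issues. A minimal round-trip sketch of that transport convention (the `Probe` class name is illustrative):

```python
import base64

# create_script ships file contents Base64-encoded under 'encodedContents',
# with 'contentsEncoded' flagging the encoding for the receiving side.
contents = 'using UnityEngine;\n\npublic class Probe : MonoBehaviour { }\n'
encoded = base64.b64encode(contents.encode("utf-8")).decode("utf-8")
params = {"encodedContents": encoded, "contentsEncoded": True}

# The receiving side reverses the encoding to recover the exact text.
decoded = base64.b64decode(params["encodedContents"]).decode("utf-8")
print(decoded == contents)  # True
```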

    @mcp.tool(description=(
        "Delete a C# script by URI or Assets-relative path.\n\n"
        "Args: uri (unity://path/... or file://... or Assets/...).\n"
        "Rules: Target must resolve under Assets/.\n"
    ))
    def delete_script(ctx: Context, uri: str) -> Dict[str, Any]:
        """Delete a C# script by URI."""
        name, directory = _split_uri(uri)
        if not directory or directory.split("/")[0].lower() != "assets":
            return {"success": False, "code": "path_outside_assets", "message": "URI must resolve under 'Assets/'."}
        params = {"action": "delete", "name": name, "path": directory}
        resp = send_command_with_retry("manage_script", params)
        return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)}

    @mcp.tool(description=(
        "Validate a C# script and return diagnostics.\n\n"
        "Args: uri, level=('basic'|'standard').\n"
        "- basic: quick syntax checks.\n"
        "- standard: deeper checks (performance hints, common pitfalls).\n"
    ))
    def validate_script(
        ctx: Context, uri: str, level: str = "basic"
    ) -> Dict[str, Any]:
        """Validate a C# script and return diagnostics."""
        name, directory = _split_uri(uri)
        if not directory or directory.split("/")[0].lower() != "assets":
            return {"success": False, "code": "path_outside_assets", "message": "URI must resolve under 'Assets/'."}
        if level not in ("basic", "standard"):
            return {"success": False, "code": "bad_level", "message": "level must be 'basic' or 'standard'."}
        params = {
            "action": "validate",
            "name": name,
            "path": directory,
            "level": level,
        }
        resp = send_command_with_retry("manage_script", params)
        return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)}

    @mcp.tool(description=(
        "Compatibility router for legacy script operations.\n\n"
        "Actions: create|read|delete (update is routed to apply_text_edits with precondition).\n"
        "Args: name (no .cs), path (Assets/...), contents (for create), script_type, namespace.\n"
        "Notes: prefer apply_text_edits (ranges) or script_apply_edits (structured) for edits.\n"
    ))
    def manage_script(
        ctx: Context,
        action: str,
        name: str,
        path: str,
        contents: str = "",
        script_type: str | None = None,
        namespace: str | None = None,
    ) -> Dict[str, Any]:
        """Compatibility router for legacy script operations.

        IMPORTANT:
        - Direct file reads should use resources/read.
        - Edits should use apply_text_edits.

        Args:
            action: Operation ('create', 'read', 'delete').
            name: Script name (no .cs extension).
            path: Asset path (default: "Assets/").
            contents: C# code for 'create'.

        Returns:
            Dictionary with results ('success', 'message', 'data').
        """
        try:
            # Graceful migration for legacy 'update': route to apply_text_edits (whole-file replace)
            if action == 'update':
                try:
                    # 1) Read current contents to compute the end range and precondition
                    read_resp = send_command_with_retry("manage_script", {
                        "action": "read",
                        "name": name,
                        "path": path,
                    })
                    if not (isinstance(read_resp, dict) and read_resp.get("success")):
                        return {"success": False, "code": "deprecated_update", "message": "Use apply_text_edits; automatic migration failed to read current file."}
                    data = read_resp.get("data", {})
                    current = data.get("contents")
                    if not current and data.get("contentsEncoded"):
                        current = base64.b64decode(data.get("encodedContents", "").encode("utf-8")).decode("utf-8", "replace")
                    if current is None:
                        return {"success": False, "code": "deprecated_update", "message": "Use apply_text_edits; current file read returned no contents."}

                    # 2) Compute the whole-file range (1-based, end exclusive) and SHA
                    import hashlib as _hashlib
                    old_lines = current.splitlines(keepends=True)
                    end_line = len(old_lines) + 1
                    sha = _hashlib.sha256(current.encode("utf-8")).hexdigest()

                    # 3) Apply a single whole-file text edit with the provided 'contents'
                    edits = [{
                        "startLine": 1,
                        "startCol": 1,
                        "endLine": end_line,
                        "endCol": 1,
                        "newText": contents or "",
                    }]
                    route_params = {
                        "action": "apply_text_edits",
                        "name": name,
                        "path": path,
                        "edits": edits,
                        "precondition_sha256": sha,
                        "options": {"refresh": "immediate", "validate": "standard"},
                    }
                    # Preflight size vs. the default cap (256 KiB) to avoid opaque server errors
                    try:
                        import json as _json
                        payload_bytes = len(_json.dumps({"edits": edits}, ensure_ascii=False).encode("utf-8"))
                        if payload_bytes > 256 * 1024:
                            return {"success": False, "code": "payload_too_large", "message": f"Edit payload {payload_bytes} bytes exceeds 256 KiB cap; try structured ops or chunking."}
                    except Exception:
                        pass
                    routed = send_command_with_retry("manage_script", route_params)
                    if isinstance(routed, dict):
                        routed.setdefault("message", "Routed legacy update to apply_text_edits")
                        return routed
                    return {"success": False, "message": str(routed)}
                except Exception as e:
                    return {"success": False, "code": "deprecated_update", "message": f"Use apply_text_edits; migration error: {e}"}

            # Prepare parameters for Unity
            params = {
                "action": action,
                "name": name,
                "path": path,
                "namespace": namespace,
                "scriptType": script_type,
            }

            # Base64-encode contents, when present, to avoid JSON escaping issues
            if contents:
                if action == 'create':
                    params["encodedContents"] = base64.b64encode(contents.encode('utf-8')).decode('utf-8')
                    params["contentsEncoded"] = True
                else:
                    params["contents"] = contents

            # Remove None values so they don't get sent as null
            params = {k: v for k, v in params.items() if v is not None}

            # Send command via the centralized retry helper
            response = send_command_with_retry("manage_script", params)

            # Process the response from Unity
            if isinstance(response, dict):
                if response.get("success"):
                    # If the response contains base64-encoded content, decode it
                    if response.get("data", {}).get("contentsEncoded"):
                        decoded_contents = base64.b64decode(response["data"]["encodedContents"]).decode('utf-8')
                        response["data"]["contents"] = decoded_contents
                        del response["data"]["encodedContents"]
                        del response["data"]["contentsEncoded"]
                    return {
                        "success": True,
                        "message": response.get("message", "Operation successful."),
                        "data": response.get("data"),
                    }
                return response
            return {"success": False, "message": str(response)}

        except Exception as e:
            # Handle Python-side errors (e.g., connection issues)
            return {
                "success": False,
                "message": f"Python error managing script: {str(e)}",
            }
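The legacy-update routing above replaces the whole file as one edit whose end position is one line past the last line (1-based, end-exclusive), guarded by a SHA-256 precondition. A self-contained sketch of that range and hash computation:

```python
import hashlib

current = "line one\nline two\n"
# 1-based, end-exclusive: endLine is one past the last line, endCol is 1.
end_line = len(current.splitlines(keepends=True)) + 1
sha = hashlib.sha256(current.encode("utf-8")).hexdigest()

edit = {"startLine": 1, "startCol": 1, "endLine": end_line, "endCol": 1,
        "newText": "replaced\n"}
print(end_line)  # 3
```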

    @mcp.tool(description=(
        "Get manage_script capabilities (supported ops, limits, and guards).\n\n"
        "Returns:\n- ops: list of supported structured ops\n- text_ops: list of supported text ops\n- max_edit_payload_bytes: server edit payload cap\n- guards: header/using guard enabled flag\n"
    ))
    def manage_script_capabilities(ctx: Context) -> Dict[str, Any]:
        try:
            # Keep in sync with the server/Editor ManageScript implementation
            ops = [
                "replace_class", "delete_class", "replace_method", "delete_method",
                "insert_method", "anchor_insert", "anchor_delete", "anchor_replace",
            ]
            text_ops = ["replace_range", "regex_replace", "prepend", "append"]
            # Match ManageScript.MaxEditPayloadBytes if exposed; fall back to a sensible default
            max_edit_payload_bytes = 256 * 1024
            guards = {"using_guard": True}
            extras = {"get_sha": True}
            return {"success": True, "data": {
                "ops": ops,
                "text_ops": text_ops,
                "max_edit_payload_bytes": max_edit_payload_bytes,
                "guards": guards,
                "extras": extras,
            }}
        except Exception as e:
            return {"success": False, "error": f"capabilities error: {e}"}

    @mcp.tool(description=(
        "Get SHA256 and metadata for a Unity C# script without returning file contents.\n\n"
        "Args: uri (unity://path/Assets/... or file://... or Assets/...).\n"
        "Returns: {sha256, lengthBytes, lastModifiedUtc, uri, path}."
    ))
    def get_sha(ctx: Context, uri: str) -> Dict[str, Any]:
        """Return SHA256 and basic metadata for a script."""
        try:
            name, directory = _split_uri(uri)
            params = {"action": "get_sha", "name": name, "path": directory}
            resp = send_command_with_retry("manage_script", params)
            return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)}
        except Exception as e:
            return {"success": False, "message": f"get_sha error: {e}"}

@@ -0,0 +1,833 @@
from mcp.server.fastmcp import FastMCP, Context
from typing import Dict, Any, List, Tuple
import base64
import re
from unity_connection import send_command_with_retry


def _apply_edits_locally(original_text: str, edits: List[Dict[str, Any]]) -> str:
    text = original_text
    for edit in edits or []:
        op = (
            (edit.get("op")
             or edit.get("operation")
             or edit.get("type")
             or edit.get("mode")
             or "")
            .strip()
            .lower()
        )

        if not op:
            allowed = "anchor_insert, prepend, append, replace_range, regex_replace"
            raise RuntimeError(
                f"op is required; allowed: {allowed}. Use 'op' (aliases accepted: type/mode/operation)."
            )

        if op == "prepend":
            prepend_text = edit.get("text", "")
            text = (prepend_text if prepend_text.endswith("\n") else prepend_text + "\n") + text
        elif op == "append":
            append_text = edit.get("text", "")
            if not text.endswith("\n"):
                text += "\n"
            text += append_text
            if not text.endswith("\n"):
                text += "\n"
        elif op == "anchor_insert":
            anchor = edit.get("anchor", "")
            position = (edit.get("position") or "before").lower()
            insert_text = edit.get("text", "")
            flags = re.MULTILINE | (re.IGNORECASE if edit.get("ignore_case") else 0)
            m = re.search(anchor, text, flags)
            if not m:
                if edit.get("allow_noop", True):
                    continue
                raise RuntimeError(f"anchor not found: {anchor}")
            idx = m.start() if position == "before" else m.end()
            text = text[:idx] + insert_text + text[idx:]
        elif op == "replace_range":
            start_line = int(edit.get("startLine", 1))
            start_col = int(edit.get("startCol", 1))
            end_line = int(edit.get("endLine", start_line))
            end_col = int(edit.get("endCol", 1))
            replacement = edit.get("text", "")
            lines = text.splitlines(keepends=True)
            max_line = len(lines) + 1  # 1-based, exclusive end
            if (start_line < 1 or end_line < start_line or end_line > max_line
                    or start_col < 1 or end_col < 1):
                raise RuntimeError("replace_range out of bounds")

            def index_of(line: int, col: int) -> int:
                if line <= len(lines):
                    return sum(len(l) for l in lines[: line - 1]) + (col - 1)
                return sum(len(l) for l in lines)

            a = index_of(start_line, start_col)
            b = index_of(end_line, end_col)
            text = text[:a] + replacement + text[b:]
        elif op == "regex_replace":
            pattern = edit.get("pattern", "")
            repl = edit.get("replacement", "")
            # Translate $n backrefs (our input) to Python \g<n>
            repl_py = re.sub(r"\$(\d+)", r"\\g<\1>", repl)
            count = int(edit.get("count", 0))  # 0 = replace all
            flags = re.MULTILINE
            if edit.get("ignore_case"):
                flags |= re.IGNORECASE
            text = re.sub(pattern, repl_py, text, count=count, flags=flags)
        else:
            allowed = "anchor_insert, prepend, append, replace_range, regex_replace"
            raise RuntimeError(f"unknown edit op: {op}; allowed: {allowed}. Use 'op' (aliases accepted: type/mode/operation).")
    return text
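Two of the mechanics in `_apply_edits_locally` are easy to get wrong from the outside: the `$n` backreference translation for `regex_replace`, and the index arithmetic for `anchor_insert`. A self-contained sketch of both (the C# snippets are illustrative inputs):

```python
import re

# regex_replace accepts $n backreferences and translates them to Python's \g<n>
# before calling re.sub.
repl = "$1Renamed"
repl_py = re.sub(r"\$(\d+)", r"\\g<\1>", repl)  # -> "\g<1>Renamed"
out = re.sub(r"(Get)Target", repl_py, "void GetTarget() {}", flags=re.MULTILINE)
print(out)  # void GetRenamed() {}

# anchor_insert: splice text at the match boundary of a regex anchor.
text = "class A {\n    void M() {}\n}\n"
m = re.search(r"void M", text, re.MULTILINE)
idx = m.start()  # position == 'before'; use m.end() for 'after'
text = text[:idx] + "// inserted\n    " + text[idx:]
```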


def _infer_class_name(script_name: str) -> str:
    # Default to the script name as the class name (common Unity pattern)
    return (script_name or "").strip()


def _extract_code_after(keyword: str, request: str) -> str:
    # Deprecated with NL removal; retained for compatibility
    idx = request.lower().find(keyword)
    if idx >= 0:
        return request[idx + len(keyword):].strip()
    return ""


def _is_structurally_balanced(text: str) -> bool:
    """Lightweight delimiter balance check for braces/parens/brackets.

    Not a full parser; used to preflight destructive regex deletes.
    """
    brace = paren = bracket = 0
    in_str = in_chr = False
    esc = False
    i = 0
    n = len(text)
    while i < n:
        c = text[i]
        nxt = text[i + 1] if i + 1 < n else ''
        if in_str:
            if not esc and c == '"':
                in_str = False
            esc = (not esc and c == '\\')
            i += 1
            continue
        if in_chr:
            if not esc and c == "'":
                in_chr = False
            esc = (not esc and c == '\\')
            i += 1
            continue
        # comments
        if c == '/' and nxt == '/':
            # skip to end of line
            i = text.find('\n', i)
            if i == -1:
                break
            i += 1
            continue
        if c == '/' and nxt == '*':
            j = text.find('*/', i + 2)
            i = (j + 2) if j != -1 else n
            continue
        if c == '"':
            in_str = True
            esc = False
            i += 1
            continue
        if c == "'":
            in_chr = True
            esc = False
            i += 1
            continue
        if c == '{':
            brace += 1
        elif c == '}':
            brace -= 1
        elif c == '(':
            paren += 1
        elif c == ')':
            paren -= 1
        elif c == '[':
            bracket += 1
        elif c == ']':
            bracket -= 1
        if brace < 0 or paren < 0 or bracket < 0:
            return False
        i += 1
    return brace == 0 and paren == 0 and bracket == 0
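The point of skipping strings and comments before counting delimiters is that C# source routinely contains braces inside literals. A simplified standalone sketch of the same idea (it handles only `//` comments and double-quoted strings; the real check also covers `/* */` comments and char literals, and the name `balanced` is illustrative):

```python
def balanced(text: str) -> bool:
    # Count {}, (), [] while skipping // comments and "..." strings.
    depth = {'{': 0, '(': 0, '[': 0}
    close = {'}': '{', ')': '(', ']': '['}
    i, n, in_str = 0, len(text), False
    while i < n:
        c = text[i]
        if in_str:
            if c == '\\':          # skip the escaped character
                i += 2
                continue
            if c == '"':
                in_str = False
        elif c == '"':
            in_str = True
        elif c == '/' and text[i:i + 2] == '//':
            j = text.find('\n', i)  # skip to end of line
            i = n if j == -1 else j
        elif c in depth:
            depth[c] += 1
        elif c in close:
            depth[close[c]] -= 1
            if depth[close[c]] < 0:  # a closer before its opener
                return False
        i += 1
    return all(v == 0 for v in depth.values())

print(balanced('class A { void M() { } }'))   # True
print(balanced('class A { void M() { }'))     # False
print(balanced('var s = "not a brace: {";'))  # True
```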


def _normalize_script_locator(name: str, path: str) -> Tuple[str, str]:
    """Best-effort normalization of script "name" and "path".

    Accepts any of:
    - name = "SmartReach", path = "Assets/Scripts/Interaction"
    - name = "SmartReach.cs", path = "Assets/Scripts/Interaction"
    - name = "Assets/Scripts/Interaction/SmartReach.cs", path = ""
    - path = "Assets/Scripts/Interaction/SmartReach.cs" (name empty)
    - name or path using URI prefixes: unity://path/..., file://...
    - accidental duplicates like "Assets/.../SmartReach.cs/SmartReach.cs"

    Returns (name_without_extension, directory_path_under_Assets).
    """
    n = (name or "").strip()
    p = (path or "").strip()

    def strip_prefix(s: str) -> str:
        if s.startswith("unity://path/"):
            return s[len("unity://path/"):]
        if s.startswith("file://"):
            return s[len("file://"):]
        return s

    def collapse_duplicate_tail(s: str) -> str:
        # Collapse a trailing "/X.cs/X.cs" to "/X.cs"
        parts = s.split("/")
        if len(parts) >= 2 and parts[-1] == parts[-2]:
            parts = parts[:-1]
        return "/".join(parts)

    # Prefer a full path if provided in either field
    candidate = ""
    for v in (n, p):
        v2 = strip_prefix(v)
        if v2.endswith(".cs") or v2.startswith("Assets/"):
            candidate = v2
            break

    if candidate:
        candidate = collapse_duplicate_tail(candidate)
        # If a directory was passed in path and a file in name, join them
        if not candidate.endswith(".cs") and n.endswith(".cs"):
            v2 = strip_prefix(n)
            candidate = (candidate.rstrip("/") + "/" + v2.split("/")[-1])
        if candidate.endswith(".cs"):
            parts = candidate.split("/")
            file_name = parts[-1]
            dir_path = "/".join(parts[:-1]) if len(parts) > 1 else "Assets"
            base = file_name[:-3] if file_name.lower().endswith(".cs") else file_name
            return base, dir_path

    # Fall back: strip the extension from name if present and return the given path
    base_name = n[:-3] if n.lower().endswith(".cs") else n
    return base_name, (p or "Assets")


def _with_norm(resp: Dict[str, Any] | Any, edits: List[Dict[str, Any]], routing: str | None = None) -> Dict[str, Any] | Any:
    if not isinstance(resp, dict):
        return resp
    data = resp.setdefault("data", {})
    data.setdefault("normalizedEdits", edits)
    if routing:
        data["routing"] = routing
    return resp


def _err(code: str, message: str, *, expected: Dict[str, Any] | None = None, rewrite: Dict[str, Any] | None = None,
         normalized: List[Dict[str, Any]] | None = None, routing: str | None = None, extra: Dict[str, Any] | None = None) -> Dict[str, Any]:
    payload: Dict[str, Any] = {"success": False, "code": code, "message": message}
    data: Dict[str, Any] = {}
    if expected:
        data["expected"] = expected
    if rewrite:
        data["rewrite_suggestion"] = rewrite
    if normalized is not None:
        data["normalizedEdits"] = normalized
    if routing:
        data["routing"] = routing
    if extra:
        data.update(extra)
    if data:
        payload["data"] = data
    return payload


# Natural-language parsing removed; clients should send structured edits.


def register_manage_script_edits_tools(mcp: FastMCP):
    @mcp.tool(description=(
        "Structured C# edits (methods/classes) with safer boundaries — prefer this over raw text.\n\n"
        "Best practices:\n"
        "- Prefer anchor_* ops for pattern-based insert/replace near stable markers\n"
        "- Use replace_method/delete_method for whole-method changes (keeps signatures balanced)\n"
        "- Avoid whole-file regex deletes; validators will guard unbalanced braces\n"
        "- For tail insertions, prefer anchor/regex_replace on final brace (class closing)\n"
        "- Pass options.validate='standard' for structural checks; 'relaxed' for interior-only edits\n\n"
        "Canonical fields (use these exact keys):\n"
        "- op: replace_method | insert_method | delete_method | anchor_insert | anchor_delete | anchor_replace\n"
        "- className: string (defaults to 'name' if omitted on method/class ops)\n"
        "- methodName: string (required for replace_method, delete_method)\n"
        "- replacement: string (required for replace_method, insert_method)\n"
        "- position: start | end | after | before (insert_method only)\n"
        "- afterMethodName / beforeMethodName: string (required when position='after'/'before')\n"
        "- anchor: regex string (for anchor_* ops)\n"
        "- text: string (for anchor_insert/anchor_replace)\n\n"
        "Do NOT use: new_method, anchor_method, content, newText (aliases accepted but normalized).\n\n"
        "Examples:\n"
        "1) Replace a method:\n"
        "{ 'name':'SmartReach','path':'Assets/Scripts/Interaction','edits':[\n"
        "  { 'op':'replace_method','className':'SmartReach','methodName':'HasTarget',\n"
        "    'replacement':'public bool HasTarget(){ return currentTarget!=null; }' }\n"
        "], 'options':{'validate':'standard','refresh':'immediate'} }\n\n"
        "2) Insert a method after another:\n"
        "{ 'name':'SmartReach','path':'Assets/Scripts/Interaction','edits':[\n"
        "  { 'op':'insert_method','className':'SmartReach','replacement':'public void PrintSeries(){ Debug.Log(seriesName); }',\n"
        "    'position':'after','afterMethodName':'GetCurrentTarget' }\n"
        "] }\n"
    ))
    def script_apply_edits(
        ctx: Context,
        name: str,
        path: str,
        edits: List[Dict[str, Any]],
        options: Dict[str, Any] | None = None,
        script_type: str = "MonoBehaviour",
        namespace: str = "",
    ) -> Dict[str, Any]:
        # Normalize the locator first so downstream calls target the correct script file.
        name, path = _normalize_script_locator(name, path)

        # No NL path: clients must provide structured edits in 'edits'.

        # Normalize unsupported or aliased ops to known structured/text paths
        def _unwrap_and_alias(edit: Dict[str, Any]) -> Dict[str, Any]:
            # Unwrap single-key wrappers like {"replace_method": {...}}
            for wrapper_key in (
                "replace_method", "insert_method", "delete_method",
                "replace_class", "delete_class",
                "anchor_insert", "anchor_replace", "anchor_delete",
            ):
                if wrapper_key in edit and isinstance(edit[wrapper_key], dict):
                    inner = dict(edit[wrapper_key])
                    inner["op"] = wrapper_key
                    edit = inner
                    break

            e = dict(edit)
            op = (e.get("op") or e.get("operation") or e.get("type") or e.get("mode") or "").strip().lower()
            if op:
                e["op"] = op

            # Common field aliases
            if "class_name" in e and "className" not in e:
                e["className"] = e.pop("class_name")
            if "class" in e and "className" not in e:
                e["className"] = e.pop("class")
            if "method_name" in e and "methodName" not in e:
                e["methodName"] = e.pop("method_name")
            # Some clients use a generic 'target' for the method name
            if "target" in e and "methodName" not in e:
                e["methodName"] = e.pop("target")
            if "method" in e and "methodName" not in e:
                e["methodName"] = e.pop("method")
            if "new_content" in e and "replacement" not in e:
                e["replacement"] = e.pop("new_content")
            if "newMethod" in e and "replacement" not in e:
                e["replacement"] = e.pop("newMethod")
            if "new_method" in e and "replacement" not in e:
                e["replacement"] = e.pop("new_method")
            if "content" in e and "replacement" not in e:
                e["replacement"] = e.pop("content")
            if "after" in e and "afterMethodName" not in e:
                e["afterMethodName"] = e.pop("after")
            if "after_method" in e and "afterMethodName" not in e:
                e["afterMethodName"] = e.pop("after_method")
            if "before" in e and "beforeMethodName" not in e:
                e["beforeMethodName"] = e.pop("before")
            if "before_method" in e and "beforeMethodName" not in e:
                e["beforeMethodName"] = e.pop("before_method")
            # anchor_method -> before/after based on position (default after)
            if "anchor_method" in e:
                anchor = e.pop("anchor_method")
                pos = (e.get("position") or "after").strip().lower()
                if pos == "before" and "beforeMethodName" not in e:
                    e["beforeMethodName"] = anchor
                elif "afterMethodName" not in e:
                    e["afterMethodName"] = anchor
            if "anchorText" in e and "anchor" not in e:
                e["anchor"] = e.pop("anchorText")
            if "pattern" in e and "anchor" not in e and e.get("op") and e["op"].startswith("anchor_"):
                e["anchor"] = e.pop("pattern")
            if "newText" in e and "text" not in e:
                e["text"] = e.pop("newText")

            # CI compatibility (T‑A/T‑E):
            # Accept method-anchored anchor_insert and upgrade it to insert_method.
            # Example incoming shape:
            #   {"op":"anchor_insert","afterMethodName":"GetCurrentTarget","text":"..."}
            if (
                e.get("op") == "anchor_insert"
                and not e.get("anchor")
                and (e.get("afterMethodName") or e.get("beforeMethodName"))
            ):
                e["op"] = "insert_method"
                if "replacement" not in e:
                    e["replacement"] = e.get("text", "")

            # LSP-like range edit -> replace_range
            if "range" in e and isinstance(e["range"], dict):
                rng = e.pop("range")
                start = rng.get("start", {})
                end = rng.get("end", {})
                # Convert 0-based to 1-based line/col
                e["op"] = "replace_range"
                e["startLine"] = int(start.get("line", 0)) + 1
                e["startCol"] = int(start.get("character", 0)) + 1
                e["endLine"] = int(end.get("line", 0)) + 1
                e["endCol"] = int(end.get("character", 0)) + 1
                if "newText" in edit and "text" not in e:
                    e["text"] = edit.get("newText", "")
            return e
|
||||
|
||||
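The alias mappings above can be exercised in isolation. A minimal sketch, assuming a standalone re-implementation of just the key-renaming subset (`ALIASES` and `normalize_edit` are illustrative names, not the production helper):

```python
# Illustrative subset of the alias normalization above; not the production code.
ALIASES = {
    "newMethod": "replacement", "new_method": "replacement", "content": "replacement",
    "after": "afterMethodName", "after_method": "afterMethodName",
    "before": "beforeMethodName", "before_method": "beforeMethodName",
    "anchorText": "anchor",
}

def normalize_edit(edit: dict) -> dict:
    e = dict(edit)
    for old, new in ALIASES.items():
        if old in e and new not in e:  # the canonical key wins when both are present
            e[new] = e.pop(old)
    return e

example = normalize_edit({"op": "insert_method", "after": "Foo", "content": "..."})
```

After normalization the edit uses only the canonical `afterMethodName` and `replacement` keys, which is what the downstream validation expects.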
    normalized_edits: List[Dict[str, Any]] = []
    for raw in edits or []:
        e = _unwrap_and_alias(raw)
        op = (e.get("op") or e.get("operation") or e.get("type") or e.get("mode") or "").strip().lower()

        # Default className to the script name if missing on structured method/class ops
        if op in ("replace_class", "delete_class", "replace_method", "delete_method", "insert_method") and not e.get("className"):
            e["className"] = name

        # Map common aliases for text ops
        if op in ("text_replace",):
            e["op"] = "replace_range"
            normalized_edits.append(e)
            continue
        if op in ("regex_delete",):
            e["op"] = "regex_replace"
            e.setdefault("text", "")
            normalized_edits.append(e)
            continue
        if op == "regex_replace" and ("replacement" not in e):
            if "text" in e:
                e["replacement"] = e.get("text", "")
            elif "insert" in e or "content" in e:
                e["replacement"] = e.get("insert") or e.get("content") or ""
        if op == "anchor_insert" and not (e.get("text") or e.get("insert") or e.get("content") or e.get("replacement")):
            e["op"] = "anchor_delete"
            normalized_edits.append(e)
            continue
        normalized_edits.append(e)

    edits = normalized_edits
    normalized_for_echo = edits

    # Validate required fields and produce machine-parsable hints
    def error_with_hint(message: str, expected: Dict[str, Any], suggestion: Dict[str, Any]) -> Dict[str, Any]:
        return _err("missing_field", message, expected=expected, rewrite=suggestion, normalized=normalized_for_echo)

    for e in edits or []:
        op = e.get("op", "")
        if op == "replace_method":
            if not e.get("methodName"):
                return error_with_hint(
                    "replace_method requires 'methodName'.",
                    {"op": "replace_method", "required": ["className", "methodName", "replacement"]},
                    {"edits[0].methodName": "HasTarget"}
                )
            if not (e.get("replacement") or e.get("text")):
                return error_with_hint(
                    "replace_method requires 'replacement' (inline or base64).",
                    {"op": "replace_method", "required": ["className", "methodName", "replacement"]},
                    {"edits[0].replacement": "public bool X(){ return true; }"}
                )
        elif op == "insert_method":
            if not (e.get("replacement") or e.get("text")):
                return error_with_hint(
                    "insert_method requires a non-empty 'replacement'.",
                    {"op": "insert_method", "required": ["className", "replacement"], "position": {"after_requires": "afterMethodName", "before_requires": "beforeMethodName"}},
                    {"edits[0].replacement": "public void PrintSeries(){ Debug.Log(\"1,2,3\"); }"}
                )
            pos = (e.get("position") or "").lower()
            if pos == "after" and not e.get("afterMethodName"):
                return error_with_hint(
                    "insert_method with position='after' requires 'afterMethodName'.",
                    {"op": "insert_method", "position": {"after_requires": "afterMethodName"}},
                    {"edits[0].afterMethodName": "GetCurrentTarget"}
                )
            if pos == "before" and not e.get("beforeMethodName"):
                return error_with_hint(
                    "insert_method with position='before' requires 'beforeMethodName'.",
                    {"op": "insert_method", "position": {"before_requires": "beforeMethodName"}},
                    {"edits[0].beforeMethodName": "GetCurrentTarget"}
                )
        elif op == "delete_method":
            if not e.get("methodName"):
                return error_with_hint(
                    "delete_method requires 'methodName'.",
                    {"op": "delete_method", "required": ["className", "methodName"]},
                    {"edits[0].methodName": "PrintSeries"}
                )
        elif op in ("anchor_insert", "anchor_replace", "anchor_delete"):
            if not e.get("anchor"):
                return error_with_hint(
                    f"{op} requires 'anchor' (regex).",
                    {"op": op, "required": ["anchor"]},
                    {"edits[0].anchor": "(?m)^\\s*public\\s+bool\\s+HasTarget\\s*\\("}
                )
            if op in ("anchor_insert", "anchor_replace") and not (e.get("text") or e.get("replacement")):
                return error_with_hint(
                    f"{op} requires 'text'.",
                    {"op": op, "required": ["anchor", "text"]},
                    {"edits[0].text": "/* comment */\n"}
                )

    # Decide routing: structured vs text vs mixed
    STRUCT = {"replace_class", "delete_class", "replace_method", "delete_method", "insert_method", "anchor_delete", "anchor_replace", "anchor_insert"}
    TEXT = {"prepend", "append", "replace_range", "regex_replace"}
    ops_set = {(e.get("op") or "").lower() for e in edits or []}
    all_struct = ops_set.issubset(STRUCT)
    all_text = ops_set.issubset(TEXT)
    mixed = not (all_struct or all_text)

    # If everything is structured (method/class/anchor ops), forward directly to Unity's structured editor.
    if all_struct:
        opts2 = dict(options or {})
        # Do not force sequential; allow the server default (atomic) unless the caller requests otherwise
        opts2.setdefault("refresh", "immediate")
        params_struct: Dict[str, Any] = {
            "action": "edit",
            "name": name,
            "path": path,
            "namespace": namespace,
            "scriptType": script_type,
            "edits": edits,
            "options": opts2,
        }
        resp_struct = send_command_with_retry("manage_script", params_struct)
        return _with_norm(resp_struct if isinstance(resp_struct, dict) else {"success": False, "message": str(resp_struct)}, normalized_for_echo, routing="structured")

    # 1) Read the current contents from Unity
    read_resp = send_command_with_retry("manage_script", {
        "action": "read",
        "name": name,
        "path": path,
        "namespace": namespace,
        "scriptType": script_type,
    })
    if not isinstance(read_resp, dict) or not read_resp.get("success"):
        return read_resp if isinstance(read_resp, dict) else {"success": False, "message": str(read_resp)}

    data = read_resp.get("data") or read_resp.get("result", {}).get("data") or {}
    contents = data.get("contents")
    if contents is None and data.get("contentsEncoded") and data.get("encodedContents"):
        contents = base64.b64decode(data["encodedContents"]).decode("utf-8")
    if contents is None:
        return {"success": False, "message": "No contents returned from Unity read."}

    # Optional preview/dry-run: apply locally and return a diff without writing
    preview = bool((options or {}).get("preview"))

    # If we have a mixed batch (TEXT + STRUCT), apply the text edits first with a precondition, then the structured edits
    if mixed:
        text_edits = [e for e in edits or [] if (e.get("op") or "").lower() in TEXT]
        struct_edits = [e for e in edits or [] if (e.get("op") or "").lower() in STRUCT]
        try:
            base_text = contents

            def line_col_from_index(idx: int) -> Tuple[int, int]:
                line = base_text.count("\n", 0, idx) + 1
                last_nl = base_text.rfind("\n", 0, idx)
                col = (idx - (last_nl + 1)) + 1 if last_nl >= 0 else idx + 1
                return line, col

            at_edits: List[Dict[str, Any]] = []
            import re as _re
            for e in text_edits:
                opx = (e.get("op") or e.get("operation") or e.get("type") or e.get("mode") or "").strip().lower()
                text_field = e.get("text") or e.get("insert") or e.get("content") or e.get("replacement") or ""
                if opx == "anchor_insert":
                    anchor = e.get("anchor") or ""
                    position = (e.get("position") or "after").lower()
                    flags = _re.MULTILINE | (_re.IGNORECASE if e.get("ignore_case") else 0)
                    try:
                        regex_obj = _re.compile(anchor, flags)
                    except Exception as ex:
                        return _with_norm(_err("bad_regex", f"Invalid anchor regex: {ex}", normalized=normalized_for_echo, routing="mixed/text-first", extra={"hint": "Escape parentheses/braces or use a simpler anchor."}), normalized_for_echo, routing="mixed/text-first")
                    m = regex_obj.search(base_text)
                    if not m:
                        return _with_norm({"success": False, "code": "anchor_not_found", "message": f"anchor not found: {anchor}"}, normalized_for_echo, routing="mixed/text-first")
                    idx = m.start() if position == "before" else m.end()
                    # Normalize the insertion to avoid jammed-together methods
                    text_field_norm = text_field
                    if not text_field_norm.startswith("\n"):
                        text_field_norm = "\n" + text_field_norm
                    if not text_field_norm.endswith("\n"):
                        text_field_norm = text_field_norm + "\n"
                    sl, sc = line_col_from_index(idx)
                    at_edits.append({"startLine": sl, "startCol": sc, "endLine": sl, "endCol": sc, "newText": text_field_norm})
                    # Do not mutate base_text when building atomic spans
                elif opx == "replace_range":
                    if all(k in e for k in ("startLine", "startCol", "endLine", "endCol")):
                        at_edits.append({
                            "startLine": int(e.get("startLine", 1)),
                            "startCol": int(e.get("startCol", 1)),
                            "endLine": int(e.get("endLine", 1)),
                            "endCol": int(e.get("endCol", 1)),
                            "newText": text_field
                        })
                    else:
                        return _with_norm(_err("missing_field", "replace_range requires startLine/startCol/endLine/endCol", normalized=normalized_for_echo, routing="mixed/text-first"), normalized_for_echo, routing="mixed/text-first")
                elif opx == "regex_replace":
                    pattern = e.get("pattern") or ""
                    try:
                        regex_obj = _re.compile(pattern, _re.MULTILINE | (_re.IGNORECASE if e.get("ignore_case") else 0))
                    except Exception as ex:
                        return _with_norm(_err("bad_regex", f"Invalid regex pattern: {ex}", normalized=normalized_for_echo, routing="mixed/text-first", extra={"hint": "Escape special chars or prefer structured delete for methods."}), normalized_for_echo, routing="mixed/text-first")
                    m = regex_obj.search(base_text)
                    if not m:
                        continue
                    # Expand $1, $2... in the replacement using this match
                    def _expand_dollars(rep: str) -> str:
                        return _re.sub(r"\$(\d+)", lambda g: m.group(int(g.group(1))) or "", rep)
                    repl = _expand_dollars(text_field)
                    sl, sc = line_col_from_index(m.start())
                    el, ec = line_col_from_index(m.end())
                    at_edits.append({"startLine": sl, "startCol": sc, "endLine": el, "endCol": ec, "newText": repl})
                    # Do not mutate base_text when building atomic spans
                elif opx in ("prepend", "append"):
                    if opx == "prepend":
                        sl, sc = 1, 1
                        at_edits.append({"startLine": sl, "startCol": sc, "endLine": sl, "endCol": sc, "newText": text_field})
                        # prepend can be applied atomically without local mutation
                    else:
                        # Insert at the true EOF position (handles both \n and \r\n correctly)
                        eof_idx = len(base_text)
                        sl, sc = line_col_from_index(eof_idx)
                        new_text = ("\n" if not base_text.endswith("\n") else "") + text_field
                        at_edits.append({"startLine": sl, "startCol": sc, "endLine": sl, "endCol": sc, "newText": new_text})
                        # Do not mutate base_text when building atomic spans
                else:
                    return _with_norm(_err("unknown_op", f"Unsupported text edit op: {opx}", normalized=normalized_for_echo, routing="mixed/text-first"), normalized_for_echo, routing="mixed/text-first")

            import hashlib
            sha = hashlib.sha256(base_text.encode("utf-8")).hexdigest()
            if at_edits:
                params_text: Dict[str, Any] = {
                    "action": "apply_text_edits",
                    "name": name,
                    "path": path,
                    "namespace": namespace,
                    "scriptType": script_type,
                    "edits": at_edits,
                    "precondition_sha256": sha,
                    "options": {"refresh": "immediate", "validate": (options or {}).get("validate", "standard"), "applyMode": ("atomic" if len(at_edits) > 1 else (options or {}).get("applyMode", "sequential"))}
                }
                resp_text = send_command_with_retry("manage_script", params_text)
                if not (isinstance(resp_text, dict) and resp_text.get("success")):
                    return _with_norm(resp_text if isinstance(resp_text, dict) else {"success": False, "message": str(resp_text)}, normalized_for_echo, routing="mixed/text-first")
        except Exception as e:
            return _with_norm({"success": False, "message": f"Text edit conversion failed: {e}"}, normalized_for_echo, routing="mixed/text-first")

        if struct_edits:
            opts2 = dict(options or {})
            # Let the server decide; do not force sequential
            opts2.setdefault("refresh", "immediate")
            params_struct: Dict[str, Any] = {
                "action": "edit",
                "name": name,
                "path": path,
                "namespace": namespace,
                "scriptType": script_type,
                "edits": struct_edits,
                "options": opts2
            }
            resp_struct = send_command_with_retry("manage_script", params_struct)
            return _with_norm(resp_struct if isinstance(resp_struct, dict) else {"success": False, "message": str(resp_struct)}, normalized_for_echo, routing="mixed/text-first")

        return _with_norm({"success": True, "message": "Applied text edits (no structured ops)"}, normalized_for_echo, routing="mixed/text-first")

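Both conversion paths above depend on the same index-to-position mapping when translating a regex match into an `apply_text_edits` range. A standalone sketch of that conversion, taking the buffer as a parameter instead of closing over it:

```python
def line_col_from_index(text: str, idx: int) -> tuple[int, int]:
    """Map a 0-based character index into text to a 1-based (line, col) pair."""
    line = text.count("\n", 0, idx) + 1          # newlines before idx
    last_nl = text.rfind("\n", 0, idx)           # start of the current line, minus one
    col = (idx - (last_nl + 1)) + 1 if last_nl >= 0 else idx + 1
    return line, col
```

Index 0 maps to (1, 1), and an index just past a newline starts column counting again at 1 on the next line.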
    # If the edits are text ops, prefer sending them to Unity's apply_text_edits with a precondition
    # so header guards and validation run on the C# side.
    # Supported conversions: anchor_insert, replace_range, regex_replace (first match only).
    text_ops = {(e.get("op") or e.get("operation") or e.get("type") or e.get("mode") or "").strip().lower() for e in (edits or [])}
    structured_kinds = {"replace_class", "delete_class", "replace_method", "delete_method", "insert_method", "anchor_insert"}
    if not text_ops.issubset(structured_kinds):
        # Convert to an apply_text_edits payload
        try:
            base_text = contents

            def line_col_from_index(idx: int) -> Tuple[int, int]:
                # 1-based line/col against the base buffer
                line = base_text.count("\n", 0, idx) + 1
                last_nl = base_text.rfind("\n", 0, idx)
                col = (idx - (last_nl + 1)) + 1 if last_nl >= 0 else idx + 1
                return line, col

            at_edits: List[Dict[str, Any]] = []
            import re as _re
            for e in edits or []:
                op = (e.get("op") or e.get("operation") or e.get("type") or e.get("mode") or "").strip().lower()
                # Aliasing for the text field
                text_field = e.get("text") or e.get("insert") or e.get("content") or ""
                if op == "anchor_insert":
                    anchor = e.get("anchor") or ""
                    position = (e.get("position") or "after").lower()
                    # Compile the regex early for helpful errors
                    try:
                        regex_obj = _re.compile(anchor, _re.MULTILINE)
                    except Exception as ex:
                        return _with_norm(_err("bad_regex", f"Invalid anchor regex: {ex}", normalized=normalized_for_echo, routing="text", extra={"hint": "Escape parentheses/braces or use a simpler anchor."}), normalized_for_echo, routing="text")
                    m = regex_obj.search(base_text)
                    if not m:
                        return _with_norm({"success": False, "code": "anchor_not_found", "message": f"anchor not found: {anchor}"}, normalized_for_echo, routing="text")
                    idx = m.start() if position == "before" else m.end()
                    # Normalize insertion newlines
                    if text_field and not text_field.startswith("\n"):
                        text_field = "\n" + text_field
                    if text_field and not text_field.endswith("\n"):
                        text_field = text_field + "\n"
                    sl, sc = line_col_from_index(idx)
                    at_edits.append({
                        "startLine": sl,
                        "startCol": sc,
                        "endLine": sl,
                        "endCol": sc,
                        "newText": text_field or ""
                    })
                    # Do not mutate the base buffer when building an atomic batch
                elif op == "replace_range":
                    # Forward directly if already in line/col form
                    if "startLine" in e:
                        at_edits.append({
                            "startLine": int(e.get("startLine", 1)),
                            "startCol": int(e.get("startCol", 1)),
                            "endLine": int(e.get("endLine", 1)),
                            "endCol": int(e.get("endCol", 1)),
                            "newText": text_field
                        })
                    else:
                        # If only indices were provided, reject (index-based ranges are not supported here)
                        return _with_norm({"success": False, "code": "missing_field", "message": "replace_range requires startLine/startCol/endLine/endCol"}, normalized_for_echo, routing="text")
                elif op == "regex_replace":
                    pattern = e.get("pattern") or ""
                    repl = text_field
                    flags = _re.MULTILINE | (_re.IGNORECASE if e.get("ignore_case") else 0)
                    # Compile early for clearer error messages
                    try:
                        regex_obj = _re.compile(pattern, flags)
                    except Exception as ex:
                        return _with_norm(_err("bad_regex", f"Invalid regex pattern: {ex}", normalized=normalized_for_echo, routing="text", extra={"hint": "Escape special chars or prefer structured delete for methods."}), normalized_for_echo, routing="text")
                    m = regex_obj.search(base_text)
                    if not m:
                        continue
                    # Expand $1, $2... backrefs in the replacement using the first match (consistent with the mixed-path behavior)
                    def _expand_dollars(rep: str) -> str:
                        return _re.sub(r"\$(\d+)", lambda g: m.group(int(g.group(1))) or "", rep)
                    repl_expanded = _expand_dollars(repl)
                    # Preview the structural balance after replacement; refuse destructive deletes
                    preview = base_text[:m.start()] + repl_expanded + base_text[m.end():]
                    if not _is_structurally_balanced(preview):
                        return _with_norm(_err("validation_failed", "regex_replace would unbalance braces/parentheses; prefer delete_method",
                                               normalized=normalized_for_echo, routing="text",
                                               extra={"status": "validation_failed", "hint": "Use script_apply_edits delete_method for method removal"}), normalized_for_echo, routing="text")
                    sl, sc = line_col_from_index(m.start())
                    el, ec = line_col_from_index(m.end())
                    at_edits.append({
                        "startLine": sl,
                        "startCol": sc,
                        "endLine": el,
                        "endCol": ec,
                        "newText": repl_expanded
                    })
                    # Do not mutate the base buffer when building an atomic batch
                else:
                    return _with_norm({"success": False, "code": "unsupported_op", "message": f"Unsupported text edit op for server-side apply_text_edits: {op}"}, normalized_for_echo, routing="text")

            if not at_edits:
                return _with_norm({"success": False, "code": "no_spans", "message": "No applicable text edit spans computed (anchor not found or zero-length)."}, normalized_for_echo, routing="text")

            # Send to Unity with a precondition SHA to enforce guards and immediate refresh
            import hashlib
            sha = hashlib.sha256(base_text.encode("utf-8")).hexdigest()
            params: Dict[str, Any] = {
                "action": "apply_text_edits",
                "name": name,
                "path": path,
                "namespace": namespace,
                "scriptType": script_type,
                "edits": at_edits,
                "precondition_sha256": sha,
                "options": {
                    "refresh": "immediate",
                    "validate": (options or {}).get("validate", "standard"),
                    "applyMode": ("atomic" if len(at_edits) > 1 else (options or {}).get("applyMode", "sequential"))
                }
            }
            resp = send_command_with_retry("manage_script", params)
            return _with_norm(
                resp if isinstance(resp, dict) else {"success": False, "message": str(resp)},
                normalized_for_echo,
                routing="text"
            )
        except Exception as e:
            return _with_norm({"success": False, "code": "conversion_failed", "message": f"Edit conversion failed: {e}"}, normalized_for_echo, routing="text")

    # For regex_replace, honor preview consistently: if preview=true, always return the diff without writing.
    # If confirm=false (the default) and preview was not requested, return the diff and instruct confirm=true to apply.
    if "regex_replace" in text_ops and (preview or not (options or {}).get("confirm")):
        try:
            preview_text = _apply_edits_locally(contents, edits)
            import difflib
            diff = list(difflib.unified_diff(contents.splitlines(), preview_text.splitlines(), fromfile="before", tofile="after", n=2))
            if len(diff) > 800:
                diff = diff[:800] + ["... (diff truncated) ..."]
            if preview:
                return {"success": True, "message": "Preview only (no write)", "data": {"diff": "\n".join(diff), "normalizedEdits": normalized_for_echo}}
            return _with_norm({"success": False, "message": "Preview diff; set options.confirm=true to apply.", "data": {"diff": "\n".join(diff)}}, normalized_for_echo, routing="text")
        except Exception as e:
            return _with_norm({"success": False, "code": "preview_failed", "message": f"Preview failed: {e}"}, normalized_for_echo, routing="text")

    # 2) Apply the edits locally (only reached for non-text ops)
    try:
        new_contents = _apply_edits_locally(contents, edits)
    except Exception as e:
        return {"success": False, "message": f"Edit application failed: {e}"}

    # Short-circuit no-op edits to avoid false "applied" reports downstream
    if new_contents == contents:
        return _with_norm({
            "success": True,
            "message": "No-op: contents unchanged",
            "data": {"no_op": True, "evidence": {"reason": "identical_content"}}
        }, normalized_for_echo, routing="text")

    if preview:
        # Produce a compact unified diff limited to small context
        import difflib
        a = contents.splitlines()
        b = new_contents.splitlines()
        diff = list(difflib.unified_diff(a, b, fromfile="before", tofile="after", n=3))
        # Limit the diff size to keep responses small
        if len(diff) > 2000:
            diff = diff[:2000] + ["... (diff truncated) ..."]
        return {"success": True, "message": "Preview only (no write)", "data": {"diff": "\n".join(diff), "normalizedEdits": normalized_for_echo}}

    # 3) Write the update back to Unity
    # Default refresh/validate for natural usage on the text path as well
    options = dict(options or {})
    options.setdefault("validate", "standard")
    options.setdefault("refresh", "immediate")

    import hashlib
    # Compute the SHA of the current file contents for the precondition
    old_lines = contents.splitlines(keepends=True)
    end_line = len(old_lines) + 1  # 1-based exclusive end
    sha = hashlib.sha256(contents.encode("utf-8")).hexdigest()

    # Apply a whole-file text edit rather than the deprecated 'update' action
    params = {
        "action": "apply_text_edits",
        "name": name,
        "path": path,
        "namespace": namespace,
        "scriptType": script_type,
        "edits": [
            {
                "startLine": 1,
                "startCol": 1,
                "endLine": end_line,
                "endCol": 1,
                "newText": new_contents,
            }
        ],
        "precondition_sha256": sha,
        "options": options or {"validate": "standard", "refresh": "immediate"},
    }

    write_resp = send_command_with_retry("manage_script", params)
    return _with_norm(
        write_resp if isinstance(write_resp, dict)
        else {"success": False, "message": str(write_resp)},
        normalized_for_echo,
        routing="text",
    )


# safe_script_edit removed to simplify the API; clients should call script_apply_edits directly

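Every write path above guards the file with a SHA-256 precondition over the exact contents that were read. A sketch of how a client could compute that value before calling `apply_text_edits` (this mirrors the `hashlib.sha256(...).hexdigest()` calls in the code above):

```python
import hashlib

def precondition_sha256(contents: str) -> str:
    # Hash the exact text returned by the read step; the server can reject the
    # write if the file changed between read and write.
    return hashlib.sha256(contents.encode("utf-8")).hexdigest()
```

The digest is a 64-character lowercase hex string and is passed as `precondition_sha256` alongside the edits.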
@ -0,0 +1,357 @@

"""
Resource wrapper tools so clients that do not expose the MCP resources primitives
can still list and read files via normal tools. These call into the same
safe-path logic (re-implemented here to avoid importing server.py).
"""
from __future__ import annotations

from typing import Dict, Any, List
import re
from pathlib import Path
from urllib.parse import urlparse, unquote
import fnmatch
import hashlib
import os

from mcp.server.fastmcp import FastMCP, Context
from unity_connection import send_command_with_retry


def _resolve_project_root(override: str | None) -> Path:
    # 1) Explicit override
    if override:
        pr = Path(override).expanduser().resolve()
        if (pr / "Assets").exists():
            return pr
    # 2) Environment
    env = os.environ.get("UNITY_PROJECT_ROOT")
    if env:
        env_path = Path(env).expanduser()
        # If UNITY_PROJECT_ROOT is relative, resolve it against the repo root (CWD) instead of the src dir
        pr = (Path.cwd() / env_path).resolve() if not env_path.is_absolute() else env_path.resolve()
        if (pr / "Assets").exists():
            return pr
    # 3) Ask Unity via manage_editor.get_project_root
    try:
        resp = send_command_with_retry("manage_editor", {"action": "get_project_root"})
        if isinstance(resp, dict) and resp.get("success"):
            pr = Path(resp.get("data", {}).get("projectRoot", "")).expanduser().resolve()
            if pr and (pr / "Assets").exists():
                return pr
    except Exception:
        pass

    # 4) Walk up from the CWD to find a Unity project (Assets + ProjectSettings)
    cur = Path.cwd().resolve()
    for _ in range(6):
        if (cur / "Assets").exists() and (cur / "ProjectSettings").exists():
            return cur
        if cur.parent == cur:
            break
        cur = cur.parent
    # 5) Search downwards (shallow) from the repo root for the first folder with Assets + ProjectSettings
    try:
        import os as _os
        root = Path.cwd().resolve()
        max_depth = 3
        for dirpath, dirnames, _ in _os.walk(root):
            rel = Path(dirpath).resolve()
            try:
                depth = len(rel.relative_to(root).parts)
            except Exception:
                # Unrelated mount/permission edge; skip deeper traversal
                dirnames[:] = []
                continue
            if depth > max_depth:
                # Prune deeper traversal
                dirnames[:] = []
                continue
            if (rel / "Assets").exists() and (rel / "ProjectSettings").exists():
                return rel
    except Exception:
        pass
    # 6) Fallback: CWD
    return Path.cwd().resolve()
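The lookup order above can be condensed into a small sketch. This is a simplified illustration of steps 1, 2, and 6 only (`pick_project_root` is an illustrative name; the Unity RPC and directory-walk steps are omitted):

```python
from __future__ import annotations
import os
import tempfile
from pathlib import Path

def pick_project_root(override: str | None) -> Path:
    # Try the explicit override, then UNITY_PROJECT_ROOT, then fall back to CWD.
    for cand in (override, os.environ.get("UNITY_PROJECT_ROOT")):
        if cand:
            p = Path(cand).expanduser()
            p = p.resolve() if p.is_absolute() else (Path.cwd() / p).resolve()
            if (p / "Assets").exists():  # a candidate only qualifies with an Assets folder
                return p
    return Path.cwd().resolve()

# Demo setup: a fake Unity project root containing an Assets folder.
os.environ.pop("UNITY_PROJECT_ROOT", None)  # keep the demo deterministic
root = tempfile.mkdtemp()
(Path(root) / "Assets").mkdir()
```

A candidate without an `Assets` folder is skipped, so a bare temp directory falls through to the CWD fallback.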


def _resolve_safe_path_from_uri(uri: str, project: Path) -> Path | None:
    raw: str | None = None
    if uri.startswith("unity://path/"):
        raw = uri[len("unity://path/"):]
    elif uri.startswith("file://"):
        parsed = urlparse(uri)
        raw = unquote(parsed.path or "")
        # On Windows, urlparse('file:///C:/x') -> path='/C:/x'. Strip the leading slash for drive letters.
        try:
            import os as _os
            if _os.name == "nt" and raw.startswith("/") and re.match(r"^/[A-Za-z]:/", raw):
                raw = raw[1:]
            # UNC paths: file://server/share -> netloc='server', path='/share'. Treat as //server/share
            if _os.name == "nt" and parsed.netloc:
                raw = f"//{parsed.netloc}{raw}"
        except Exception:
            pass
    elif uri.startswith("Assets/"):
        raw = uri
    if raw is None:
        return None
    # Normalize separators early
    raw = raw.replace("\\", "/")
    p = (project / raw).resolve()
    try:
        p.relative_to(project)
    except ValueError:
        return None
    return p
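The containment check at the end is what rejects path traversal. A standalone sketch of just the `unity://path/` branch (`resolve_unity_uri` is an illustrative name; `file://` and bare `Assets/` handling are omitted):

```python
from __future__ import annotations
from pathlib import Path

def resolve_unity_uri(uri: str, project: Path) -> Path | None:
    if not uri.startswith("unity://path/"):
        return None
    raw = uri[len("unity://path/"):].replace("\\", "/")
    p = (project / raw).resolve()
    try:
        p.relative_to(project)  # reject ../ traversal outside the project root
    except ValueError:
        return None
    return p

project = Path.cwd().resolve()
ok = resolve_unity_uri("unity://path/Assets/Scripts/Foo.cs", project)
bad = resolve_unity_uri("unity://path/../outside.txt", project)
```

Because the candidate is resolved before the `relative_to` check, a URI containing `..` that escapes the project root comes back as `None` instead of a usable path.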
|
||||
|
||||
|
||||
def register_resource_tools(mcp: FastMCP) -> None:
|
||||
"""Registers list_resources and read_resource wrapper tools."""
|
||||
|
||||
@mcp.tool(description=(
|
||||
"List project URIs (unity://path/...) under a folder (default: Assets).\n\n"
|
||||
"Args: pattern (glob, default *.cs), under (folder under project root), limit, project_root.\n"
|
||||
"Security: restricted to Assets/ subtree; symlinks are resolved and must remain under Assets/.\n"
|
||||
"Notes: Only .cs files are returned by default; always appends unity://spec/script-edits.\n"
|
||||
))
|
||||
async def list_resources(
|
||||
ctx: Context | None = None,
|
||||
pattern: str | None = "*.cs",
|
||||
under: str = "Assets",
|
||||
limit: int = 200,
|
||||
project_root: str | None = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Lists project URIs (unity://path/...) under a folder (default: Assets).
|
||||
- pattern: glob like *.cs or *.shader (None to list all files)
|
||||
- under: relative folder under project root
|
||||
- limit: max results
|
||||
"""
|
||||
try:
|
||||
project = _resolve_project_root(project_root)
|
||||
base = (project / under).resolve()
|
||||
try:
|
||||
base.relative_to(project)
|
||||
except ValueError:
|
||||
return {"success": False, "error": "Base path must be under project root"}
|
||||
# Enforce listing only under Assets
|
||||
try:
|
||||
base.relative_to(project / "Assets")
|
||||
except ValueError:
|
||||
return {"success": False, "error": "Listing is restricted to Assets/"}
|
||||
|
||||
matches: List[str] = []
|
||||
for p in base.rglob("*"):
|
||||
if not p.is_file():
|
||||
continue
|
||||
# Resolve symlinks and ensure the real path stays under project/Assets
|
||||
try:
|
||||
rp = p.resolve()
|
||||
rp.relative_to(project / "Assets")
|
||||
except Exception:
|
||||
continue
|
||||
# Enforce .cs extension regardless of provided pattern
|
||||
if p.suffix.lower() != ".cs":
|
||||
continue
|
||||
if pattern and not fnmatch.fnmatch(p.name, pattern):
|
||||
continue
|
||||
rel = p.relative_to(project).as_posix()
|
||||
matches.append(f"unity://path/{rel}")
|
||||
if len(matches) >= max(1, limit):
|
||||
break
|
||||
|
||||
# Always include the canonical spec resource so NL clients can discover it
|
||||
if "unity://spec/script-edits" not in matches:
|
||||
matches.append("unity://spec/script-edits")
|
||||
|
||||
return {"success": True, "data": {"uris": matches, "count": len(matches)}}
|
||||
except Exception as e:
|
||||
return {"success": False, "error": str(e)}
|
||||
|
||||
@mcp.tool(description=(
|
||||
"Read a resource by unity://path/... URI with optional slicing.\n\n"
|
||||
"Args: uri, start_line/line_count or head_bytes, tail_lines (optional), project_root, request (NL hints).\n"
|
||||
"Security: uri must resolve under Assets/.\n"
|
||||
"Examples: head_bytes=1024; start_line=100,line_count=40; tail_lines=120.\n"
|
||||
))
|
||||
async def read_resource(
|
||||
uri: str,
|
||||
ctx: Context | None = None,
|
||||
start_line: int | None = None,
|
||||
line_count: int | None = None,
|
||||
head_bytes: int | None = None,
|
||||
tail_lines: int | None = None,
|
||||
project_root: str | None = None,
|
||||
request: str | None = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Reads a resource by unity://path/... URI with optional slicing.
|
||||
One of line window (start_line/line_count) or head_bytes can be used to limit size.
|
||||
"""
|
||||
    try:
        # Serve the canonical spec directly when requested (allow bare or with scheme)
        if uri in ("unity://spec/script-edits", "spec/script-edits", "script-edits"):
            spec_json = (
                '{\n'
                '  "name": "Unity MCP — Script Edits v1",\n'
                '  "target_tool": "script_apply_edits",\n'
                '  "canonical_rules": {\n'
                '    "always_use": ["op","className","methodName","replacement","afterMethodName","beforeMethodName"],\n'
                '    "never_use": ["new_method","anchor_method","content","newText"],\n'
                '    "defaults": {\n'
                '      "className": "\u2190 server will default to \'name\' when omitted",\n'
                '      "position": "end"\n'
                '    }\n'
                '  },\n'
                '  "ops": [\n'
                '    {"op":"replace_method","required":["className","methodName","replacement"],"optional":["returnType","parametersSignature","attributesContains"],"examples":[{"note":"match overload by signature","parametersSignature":"(int a, string b)"},{"note":"ensure attributes retained","attributesContains":"ContextMenu"}]},\n'
                '    {"op":"insert_method","required":["className","replacement"],"position":{"enum":["start","end","after","before"],"after_requires":"afterMethodName","before_requires":"beforeMethodName"}},\n'
                '    {"op":"delete_method","required":["className","methodName"]},\n'
                '    {"op":"anchor_insert","required":["anchor","text"],"notes":"regex; position=before|after"}\n'
                '  ],\n'
                '  "apply_text_edits_recipe": {\n'
                '    "step1_read": { "tool": "resources/read", "args": {"uri": "unity://path/Assets/Scripts/Interaction/SmartReach.cs"} },\n'
                '    "step2_apply": {\n'
                '      "tool": "manage_script",\n'
                '      "args": {\n'
                '        "action": "apply_text_edits",\n'
                '        "name": "SmartReach", "path": "Assets/Scripts/Interaction",\n'
                '        "edits": [{"startLine": 42, "startCol": 1, "endLine": 42, "endCol": 1, "newText": "[MyAttr]\\n"}],\n'
                '        "precondition_sha256": "<sha-from-step1>",\n'
                '        "options": {"refresh": "immediate", "validate": "standard"}\n'
                '      }\n'
                '    },\n'
                '    "note": "newText is for apply_text_edits ranges only; use replacement in script_apply_edits ops."\n'
                '  },\n'
                '  "examples": [\n'
                '    {\n'
                '      "title": "Replace a method",\n'
                '      "args": {\n'
                '        "name": "SmartReach",\n'
                '        "path": "Assets/Scripts/Interaction",\n'
                '        "edits": [\n'
                '          {"op":"replace_method","className":"SmartReach","methodName":"HasTarget","replacement":"public bool HasTarget() { return currentTarget != null; }"}\n'
                '        ],\n'
                '        "options": { "validate": "standard", "refresh": "immediate" }\n'
                '      }\n'
                '    },\n'
                '    {\n'
                '      "title": "Insert a method after another",\n'
                '      "args": {\n'
                '        "name": "SmartReach",\n'
                '        "path": "Assets/Scripts/Interaction",\n'
                '        "edits": [\n'
                '          {"op":"insert_method","className":"SmartReach","replacement":"public void PrintSeries() { Debug.Log(seriesName); }","position":"after","afterMethodName":"GetCurrentTarget"}\n'
                '        ]\n'
                '      }\n'
                '    }\n'
                '  ]\n'
                '}\n'
            )
            sha = hashlib.sha256(spec_json.encode("utf-8")).hexdigest()
            return {"success": True, "data": {"text": spec_json, "metadata": {"sha256": sha}}}

        project = _resolve_project_root(project_root)
        p = _resolve_safe_path_from_uri(uri, project)
        if not p or not p.exists() or not p.is_file():
            return {"success": False, "error": f"Resource not found: {uri}"}
        try:
            p.relative_to(project / "Assets")
        except ValueError:
            return {"success": False, "error": "Read restricted to Assets/"}
        # Natural-language convenience: request like "last 120 lines", "first 200 lines",
        # "show 40 lines around MethodName", etc.
        if request:
            req = request.strip().lower()
            m = re.search(r"last\s+(\d+)\s+lines", req)
            if m:
                tail_lines = int(m.group(1))
            m = re.search(r"first\s+(\d+)\s+lines", req)
            if m:
                start_line = 1
                line_count = int(m.group(1))
            m = re.search(r"first\s+(\d+)\s*bytes", req)
            if m:
                head_bytes = int(m.group(1))
            m = re.search(r"show\s+(\d+)\s+lines\s+around\s+([A-Za-z_][A-Za-z0-9_]*)", req)
            if m:
                window = int(m.group(1))
                method = m.group(2)
                # naive search for the method header to get a line number
                text_all = p.read_text(encoding="utf-8")
                lines_all = text_all.splitlines()
                pat = re.compile(rf"^\s*(?:\[[^\]]+\]\s*)*(?:public|private|protected|internal|static|virtual|override|sealed|async|extern|unsafe|new|partial).*?\b{re.escape(method)}\s*\(", re.MULTILINE)
                hit_line = None
                for i, line in enumerate(lines_all, start=1):
                    if pat.search(line):
                        hit_line = i
                        break
                if hit_line:
                    half = max(1, window // 2)
                    start_line = max(1, hit_line - half)
                    line_count = window

        # Mutually exclusive windowing options precedence:
        # 1) head_bytes, 2) tail_lines, 3) start_line+line_count, else full text
        if head_bytes and head_bytes > 0:
            raw = p.read_bytes()[:head_bytes]
            text = raw.decode("utf-8", errors="replace")
        else:
            text = p.read_text(encoding="utf-8")
            if tail_lines is not None and tail_lines > 0:
                lines = text.splitlines()
                n = max(0, tail_lines)
                text = "\n".join(lines[-n:])
            elif start_line is not None and line_count is not None and line_count >= 0:
                lines = text.splitlines()
                s = max(0, start_line - 1)
                e = min(len(lines), s + line_count)
                text = "\n".join(lines[s:e])

        sha = hashlib.sha256(text.encode("utf-8")).hexdigest()
        return {"success": True, "data": {"text": text, "metadata": {"sha256": sha}}}
    except Exception as e:
        return {"success": False, "error": str(e)}

@mcp.tool()
async def find_in_file(
    uri: str,
    pattern: str,
    ctx: Context | None = None,
    ignore_case: bool | None = True,
    project_root: str | None = None,
    max_results: int | None = 200,
) -> Dict[str, Any]:
    """
    Searches a file with a regex pattern and returns line numbers and excerpts.
    - uri: unity://path/Assets/... or file path form supported by read_resource
    - pattern: regular expression (Python re)
    - ignore_case: case-insensitive by default
    - max_results: cap results to avoid huge payloads
    """
    # re is already imported at module level
    try:
        project = _resolve_project_root(project_root)
        p = _resolve_safe_path_from_uri(uri, project)
        if not p or not p.exists() or not p.is_file():
            return {"success": False, "error": f"Resource not found: {uri}"}

        text = p.read_text(encoding="utf-8")
        flags = re.MULTILINE
        if ignore_case:
            flags |= re.IGNORECASE
        rx = re.compile(pattern, flags)

        results = []
        lines = text.splitlines()
        for i, line in enumerate(lines, start=1):
            if rx.search(line):
                results.append({"line": i, "text": line})
                if max_results and len(results) >= max_results:
                    break

        return {"success": True, "data": {"matches": results, "count": len(results)}}
    except Exception as e:
        return {"success": False, "error": str(e)}

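The script-edits spec served above pins a canonical key set for script_apply_edits ops (always_use vs. never_use). A small self-check sketch that validates a candidate edit against those lists; the payload and helper name are illustrative, only the key lists come from the spec:

```python
# Key lists copied from the spec's canonical_rules
ALWAYS_USE = {"op", "className", "methodName", "replacement",
              "afterMethodName", "beforeMethodName"}
NEVER_USE = {"new_method", "anchor_method", "content", "newText"}

def check_edit_keys(edit: dict) -> list:
    """Return the keys of an edit that violate the spec's never_use list."""
    return sorted(k for k in edit if k in NEVER_USE)

# A canonical replace_method edit, shaped like the spec's first example
edit = {
    "op": "replace_method",
    "className": "SmartReach",
    "methodName": "HasTarget",
    "replacement": "public bool HasTarget() { return currentTarget != null; }",
}
```

Running check_edit_keys before dispatch catches the common mistake of passing newText (an apply_text_edits field) to a structured op.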
@@ -1,12 +1,15 @@
import contextlib
import errno
import json
import logging
import random
import socket
import struct
import threading
import time
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Dict

from config import config
from port_discovery import PortDiscovery

@@ -17,31 +20,86 @@ logging.basicConfig(
)
logger = logging.getLogger("mcp-for-unity-server")

# Module-level lock to guard global connection initialization
_connection_lock = threading.Lock()

# Maximum allowed framed payload size (64 MiB)
FRAMED_MAX = 64 * 1024 * 1024


@dataclass
class UnityConnection:
    """Manages the socket connection to the Unity Editor."""
    host: str = config.unity_host
    port: int = None  # Will be set dynamically
    sock: socket.socket = None  # Socket for Unity communication
    use_framing: bool = False  # Negotiated per-connection

    def __post_init__(self):
        """Set port from discovery if not explicitly provided."""
        if self.port is None:
            self.port = PortDiscovery.discover_unity_port()
        self._io_lock = threading.Lock()
        self._conn_lock = threading.Lock()

    def connect(self) -> bool:
        """Establish a connection to the Unity Editor."""
        with self._conn_lock:
            if self.sock:
                return True
            try:
                # Bounded connect to avoid indefinite blocking
                connect_timeout = float(getattr(config, "connect_timeout", getattr(config, "connection_timeout", 1.0)))
                self.sock = socket.create_connection((self.host, self.port), connect_timeout)
                # Disable Nagle's algorithm to reduce small RPC latency
                with contextlib.suppress(Exception):
                    self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
                logger.debug(f"Connected to Unity at {self.host}:{self.port}")

                # Strict handshake: require FRAMING=1
                try:
                    require_framing = getattr(config, "require_framing", True)
                    timeout = float(getattr(config, "handshake_timeout", 1.0))
                    self.sock.settimeout(timeout)
                    buf = bytearray()
                    deadline = time.monotonic() + timeout
                    while time.monotonic() < deadline and len(buf) < 512:
                        try:
                            chunk = self.sock.recv(256)
                            if not chunk:
                                break
                            buf.extend(chunk)
                            if b"\n" in buf:
                                break
                        except socket.timeout:
                            break
                    text = bytes(buf).decode('ascii', errors='ignore').strip()

                    if 'FRAMING=1' in text:
                        self.use_framing = True
                        logger.debug('Unity MCP handshake received: FRAMING=1 (strict)')
                    else:
                        if require_framing:
                            # Best-effort plain-text advisory for legacy peers
                            with contextlib.suppress(Exception):
                                self.sock.sendall(b'Unity MCP requires FRAMING=1\n')
                            raise ConnectionError(f'Unity MCP requires FRAMING=1, got: {text!r}')
                        else:
                            self.use_framing = False
                            logger.warning('Unity MCP handshake missing FRAMING=1; proceeding in legacy mode by configuration')
                finally:
                    self.sock.settimeout(config.connection_timeout)
                return True
            except Exception as e:
                logger.error(f"Failed to connect to Unity: {str(e)}")
                try:
                    if self.sock:
                        self.sock.close()
                except Exception:
                    pass
                self.sock = None
                return False

    def disconnect(self):
        """Close the connection to the Unity Editor."""

@@ -53,10 +111,48 @@ class UnityConnection:
        finally:
            self.sock = None

    def _read_exact(self, sock: socket.socket, count: int) -> bytes:
        data = bytearray()
        while len(data) < count:
            chunk = sock.recv(count - len(data))
            if not chunk:
                raise ConnectionError("Connection closed before reading expected bytes")
            data.extend(chunk)
        return bytes(data)

    def receive_full_response(self, sock, buffer_size=config.buffer_size) -> bytes:
        """Receive a complete response from Unity, handling chunked data."""
        if self.use_framing:
            try:
                # Consume heartbeats, but do not hang indefinitely if only zero-length frames arrive
                heartbeat_count = 0
                deadline = time.monotonic() + getattr(config, 'framed_receive_timeout', 2.0)
                while True:
                    header = self._read_exact(sock, 8)
                    payload_len = struct.unpack('>Q', header)[0]
                    if payload_len == 0:
                        # Heartbeat/no-op frame: consume and continue waiting for a data frame
                        logger.debug("Received heartbeat frame (length=0)")
                        heartbeat_count += 1
                        if heartbeat_count >= getattr(config, 'max_heartbeat_frames', 16) or time.monotonic() > deadline:
                            # Treat as empty successful response to match C# server behavior
                            logger.debug("Heartbeat threshold reached; returning empty response")
                            return b""
                        continue
                    if payload_len > FRAMED_MAX:
                        raise ValueError(f"Invalid framed length: {payload_len}")
                    payload = self._read_exact(sock, payload_len)
                    logger.debug(f"Received framed response ({len(payload)} bytes)")
                    return payload
            except socket.timeout as e:
                logger.warning("Socket timeout during framed receive")
                raise TimeoutError("Timeout receiving Unity response") from e
            except Exception as e:
                logger.error(f"Error during framed receive: {str(e)}")
                raise

        chunks = []
        # Respect the socket's currently configured timeout
        try:
            while True:
                chunk = sock.recv(buffer_size)

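receive_full_response reads an 8-byte big-endian length prefix, skips zero-length heartbeat frames, then reads exactly that many payload bytes. The wire format can be sketched in isolation over plain byte strings (helper names are illustrative, not the server's API):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a payload with its 8-byte big-endian length."""
    return struct.pack(">Q", len(payload)) + payload

def unframe(stream: bytes) -> tuple[bytes, bytes]:
    """Pop one data frame, skipping zero-length heartbeat frames.

    Returns (payload, remaining_stream).
    """
    while True:
        (n,) = struct.unpack(">Q", stream[:8])
        stream = stream[8:]
        if n == 0:
            continue  # heartbeat: header only, no payload follows
        return stream[:n], stream[n:]
```

A zero-length frame is therefore just the eight zero bytes of its header, which is why the receiver can consume any number of them before the real response.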
@@ -148,15 +244,9 @@ class UnityConnection:
        for attempt in range(attempts + 1):
            try:
                # Ensure connected (handshake occurs within connect())
                if not self.sock and not self.connect():
                    raise Exception("Could not connect to Unity")

                # Build payload
                if command_type == 'ping':

@@ -165,18 +255,36 @@ class UnityConnection:
                command = {"type": command_type, "params": params or {}}
                payload = json.dumps(command, ensure_ascii=False).encode('utf-8')

                # Send/receive are serialized to protect the shared socket
                with self._io_lock:
                    mode = 'framed' if self.use_framing else 'legacy'
                    with contextlib.suppress(Exception):
                        logger.debug(
                            "send %d bytes; mode=%s; head=%s",
                            len(payload),
                            mode,
                            (payload[:32]).decode('utf-8', 'ignore'),
                        )
                    if self.use_framing:
                        header = struct.pack('>Q', len(payload))
                        self.sock.sendall(header)
                        self.sock.sendall(payload)
                    else:
                        self.sock.sendall(payload)

                    # During retry bursts use a short receive timeout and ensure restoration
                    restore_timeout = None
                    if attempt > 0 and last_short_timeout is None:
                        restore_timeout = self.sock.gettimeout()
                        self.sock.settimeout(1.0)
                    try:
                        response_data = self.receive_full_response(self.sock)
                        with contextlib.suppress(Exception):
                            logger.debug("recv %d bytes; mode=%s", len(response_data), mode)
                    finally:
                        if restore_timeout is not None:
                            self.sock.settimeout(restore_timeout)
                            last_short_timeout = None

                # Parse
                if command_type == 'ping':

@@ -241,43 +349,26 @@ class UnityConnection:
_unity_connection = None


def get_unity_connection() -> UnityConnection:
    """Retrieve or establish a persistent Unity connection.

    Note: Do NOT ping on every retrieval to avoid connection storms. Rely on
    send_command() exceptions to detect broken sockets and reconnect there.
    """
    global _unity_connection
    if _unity_connection is not None:
        return _unity_connection

    # Double-checked locking to avoid concurrent socket creation
    with _connection_lock:
        if _unity_connection is not None:
            return _unity_connection
        logger.info("Creating new Unity connection")
        _unity_connection = UnityConnection()
        if not _unity_connection.connect():
            _unity_connection = None
            raise ConnectionError("Could not connect to Unity. Ensure the Unity Editor and MCP Bridge are running.")
        logger.info("Connected to Unity on startup")
        return _unity_connection


# -----------------------------

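get_unity_connection uses double-checked locking: a lock-free fast path, then a re-check under the module lock so concurrent callers cannot each open their own socket. The pattern in isolation, with an illustrative factory in place of UnityConnection:

```python
import threading

_instance = None
_lock = threading.Lock()

def get_instance(factory):
    """Create the shared instance at most once, even under concurrent callers."""
    global _instance
    if _instance is not None:      # fast path: no lock once initialized
        return _instance
    with _lock:
        if _instance is None:      # re-check: another thread may have won the race
            _instance = factory()
        return _instance
```

The second check under the lock is what makes the pattern safe; without it, two threads passing the fast path would both call the factory.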
@@ -160,6 +160,21 @@ cli = [
    { name = "typer" },
]

[[package]]
name = "mcpforunityserver"
version = "3.0.2"
source = { editable = "." }
dependencies = [
    { name = "httpx" },
    { name = "mcp", extra = ["cli"] },
]

[package.metadata]
requires-dist = [
    { name = "httpx", specifier = ">=0.27.2" },
    { name = "mcp", extras = ["cli"], specifier = ">=1.4.1" },
]

[[package]]
name = "mdurl"
version = "0.1.2"

@@ -370,21 +385,6 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d", size = 37438 },
]

[[package]]
name = "uvicorn"
version = "0.34.0"

@@ -1,51 +0,0 @@
### macOS: Claude CLI fails to start (dyld ICU library not loaded)

- Symptoms
  - MCP for Unity error: “Failed to start Claude CLI. dyld: Library not loaded: /usr/local/opt/icu4c/lib/libicui18n.71.dylib …”
  - Running `claude` in Terminal fails with missing `libicui18n.xx.dylib`.

- Cause
  - Homebrew Node (or the `claude` binary) was linked against an ICU version that’s no longer installed; dyld can’t find that dylib.

- Fix options (pick one)
  - Reinstall Homebrew Node (relinks to current ICU), then reinstall the CLI:
    ```bash
    brew update
    brew reinstall node
    npm uninstall -g @anthropic-ai/claude-code
    npm install -g @anthropic-ai/claude-code
    ```
  - Use NVM Node (avoids Homebrew ICU churn):
    ```bash
    nvm install --lts
    nvm use --lts
    npm install -g @anthropic-ai/claude-code
    # MCP for Unity → Claude Code → Choose Claude Location → ~/.nvm/versions/node/<ver>/bin/claude
    ```
  - Use the native installer (puts `claude` in a stable path):
    ```bash
    # macOS/Linux
    curl -fsSL https://claude.ai/install.sh | bash
    # MCP for Unity → Claude Code → Choose Claude Location → /opt/homebrew/bin/claude or ~/.local/bin/claude
    ```

- After fixing
  - In MCP for Unity (Claude Code), click “Choose Claude Location”, select the working `claude` binary, then Register again.

- More details
  - See: Troubleshooting MCP for Unity and Claude Code

---

### FAQ (Claude Code)

- Q: Unity can’t find `claude` even though Terminal can.
  - A: macOS apps launched from Finder/Hub don’t inherit your shell PATH. In the MCP for Unity window, click “Choose Claude Location” and select the absolute path (e.g., `/opt/homebrew/bin/claude` or `~/.nvm/versions/node/<ver>/bin/claude`).

- Q: I installed via NVM; where is `claude`?
  - A: Typically `~/.nvm/versions/node/<ver>/bin/claude`. Our UI also scans NVM versions, and you can browse to it via “Choose Claude Location”.

- Q: The Register button says “Claude Not Found”.
  - A: Install the CLI or set the path. Click the orange “[HELP]” link in the MCP for Unity window for step-by-step install instructions, then choose the binary location.

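Since GUI-launched apps do not inherit the shell PATH, the absolute `claude` path has to be located manually before it can be selected in the UI. A quick way to list candidates from a terminal; the paths below are typical install locations, not guaranteed ones:

```shell
# Report the first claude binary found among common install locations
found=""
for p in /opt/homebrew/bin/claude /usr/local/bin/claude \
         "$HOME/.local/bin/claude" "$HOME"/.nvm/versions/node/*/bin/claude; do
  if [ -x "$p" ]; then found="$p"; break; fi
done
echo "claude: ${found:-not found}"
```

Whatever path this prints is the one to paste into “Choose Claude Location”.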
@@ -0,0 +1,98 @@
#!/usr/bin/env python3
import socket, struct, json, sys

HOST = "127.0.0.1"
PORT = 6400
try:
    SIZE_MB = int(sys.argv[1])
except (IndexError, ValueError):
    SIZE_MB = 5  # e.g., 5 or 10
FILL = "R"
MAX_FRAME = 64 * 1024 * 1024


def recv_exact(sock, n):
    buf = bytearray(n)
    view = memoryview(buf)
    off = 0
    while off < n:
        r = sock.recv_into(view[off:])
        if r == 0:
            raise RuntimeError("socket closed")
        off += r
    return bytes(buf)


def is_valid_json(b):
    try:
        json.loads(b.decode("utf-8"))
        return True
    except Exception:
        return False


def recv_legacy_json(sock, timeout=60):
    sock.settimeout(timeout)
    chunks = []
    while True:
        chunk = sock.recv(65536)
        if not chunk:
            data = b"".join(chunks)
            if not data:
                raise RuntimeError("no data, socket closed")
            return data
        chunks.append(chunk)
        data = b"".join(chunks)
        if data.strip() == b"ping":
            return data
        if is_valid_json(data):
            return data


def main():
    # Cap filler to stay within framing limit (reserve small overhead for JSON)
    safe_max = max(1, MAX_FRAME - 4096)
    filler_len = min(SIZE_MB * 1024 * 1024, safe_max)
    body = {
        "type": "read_console",
        "params": {
            "action": "get",
            "types": ["all"],
            "count": 1000,
            "format": "detailed",
            "includeStacktrace": True,
            "filterText": FILL * filler_len
        }
    }
    body_bytes = json.dumps(body, ensure_ascii=False).encode("utf-8")

    with socket.create_connection((HOST, PORT), timeout=5) as s:
        s.settimeout(2)
        # Read optional greeting
        try:
            greeting = s.recv(256)
        except Exception:
            greeting = b""
        greeting_text = greeting.decode("ascii", errors="ignore").strip()
        print(f"Greeting: {greeting_text or '(none)'}")

        framing = "FRAMING=1" in greeting_text
        print(f"Using framing? {framing}")

        s.settimeout(120)
        if framing:
            header = struct.pack(">Q", len(body_bytes))
            s.sendall(header + body_bytes)
            resp_len = struct.unpack(">Q", recv_exact(s, 8))[0]
            print(f"Response framed length: {resp_len}")
            MAX_RESP = MAX_FRAME
            if resp_len <= 0 or resp_len > MAX_RESP:
                raise RuntimeError(f"invalid framed length: {resp_len} (max {MAX_RESP})")
            resp = recv_exact(s, resp_len)
        else:
            s.sendall(body_bytes)
            resp = recv_legacy_json(s)

        print(f"Response bytes: {len(resp)}")
        print(f"Response head: {resp[:120].decode('utf-8','ignore')}")


if __name__ == "__main__":
    main()

@@ -0,0 +1,116 @@
import sys
import pathlib
import importlib.util
import types


ROOT = pathlib.Path(__file__).resolve().parents[1]
SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src"
sys.path.insert(0, str(SRC))

# stub mcp.server.fastmcp
mcp_pkg = types.ModuleType("mcp")
server_pkg = types.ModuleType("mcp.server")
fastmcp_pkg = types.ModuleType("mcp.server.fastmcp")
class _Dummy: pass
fastmcp_pkg.FastMCP = _Dummy
fastmcp_pkg.Context = _Dummy
server_pkg.fastmcp = fastmcp_pkg
mcp_pkg.server = server_pkg
sys.modules.setdefault("mcp", mcp_pkg)
sys.modules.setdefault("mcp.server", server_pkg)
sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg)


def _load(path: pathlib.Path, name: str):
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod


manage_script = _load(SRC / "tools" / "manage_script.py", "manage_script_mod2")
manage_script_edits = _load(SRC / "tools" / "manage_script_edits.py", "manage_script_edits_mod2")


class DummyMCP:
    def __init__(self): self.tools = {}
    def tool(self, *args, **kwargs):
        def deco(fn): self.tools[fn.__name__] = fn; return fn
        return deco


def setup_tools():
    mcp = DummyMCP()
    manage_script.register_manage_script_tools(mcp)
    return mcp.tools


def test_normalizes_lsp_and_index_ranges(monkeypatch):
    tools = setup_tools()
    apply = tools["apply_text_edits"]
    calls = []

    def fake_send(cmd, params):
        calls.append(params)
        return {"success": True}

    monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send)

    # LSP-style
    edits = [{
        "range": {"start": {"line": 10, "character": 2}, "end": {"line": 10, "character": 2}},
        "newText": "// lsp\n"
    }]
    apply(None, uri="unity://path/Assets/Scripts/F.cs", edits=edits, precondition_sha256="x")
    p = calls[-1]
    e = p["edits"][0]
    assert e["startLine"] == 11 and e["startCol"] == 3

    # Index pair
    calls.clear()
    edits = [{"range": [0, 0], "text": "// idx\n"}]
    # fake read to provide contents length
    def fake_read(cmd, params):
        if params.get("action") == "read":
            return {"success": True, "data": {"contents": "hello\n"}}
        return {"success": True}
    monkeypatch.setattr(manage_script, "send_command_with_retry", fake_read)
    apply(None, uri="unity://path/Assets/Scripts/F.cs", edits=edits, precondition_sha256="x")
    # last call is apply_text_edits


def test_noop_evidence_shape(monkeypatch):
    tools = setup_tools()
    apply = tools["apply_text_edits"]
    # Route response from Unity indicating a no-op
    def fake_send(cmd, params):
        return {"success": True, "data": {"no_op": True, "evidence": {"reason": "identical_content"}}}
    monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send)

    resp = apply(None, uri="unity://path/Assets/Scripts/F.cs", edits=[{"startLine": 1, "startCol": 1, "endLine": 1, "endCol": 1, "newText": ""}], precondition_sha256="x")
    assert resp["success"] is True
    assert resp.get("data", {}).get("no_op") is True


def test_atomic_multi_span_and_relaxed(monkeypatch):
    tools_text = setup_tools()
    apply_text = tools_text["apply_text_edits"]
    tools_struct = DummyMCP(); manage_script_edits.register_manage_script_edits_tools(tools_struct)
    # Fake send for read and write; verify atomic applyMode and validate=relaxed pass through
    sent = {}
    def fake_send(cmd, params):
        if params.get("action") == "read":
            return {"success": True, "data": {"contents": "public class C{\nvoid M(){ int x=2; }\n}\n"}}
        sent.setdefault("calls", []).append(params)
        return {"success": True}
    monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send)

    edits = [
        {"startLine": 2, "startCol": 14, "endLine": 2, "endCol": 15, "newText": "3"},
        {"startLine": 3, "startCol": 2, "endLine": 3, "endCol": 2, "newText": "// tail\n"}
    ]
    resp = apply_text(None, uri="unity://path/Assets/Scripts/C.cs", edits=edits, precondition_sha256="sha", options={"validate": "relaxed", "applyMode": "atomic"})
    assert resp["success"] is True
    # Last manage_script call should include options with applyMode atomic and validate relaxed
    last = sent["calls"][-1]
    assert last.get("options", {}).get("applyMode") == "atomic"
    assert last.get("options", {}).get("validate") == "relaxed"

@@ -0,0 +1,84 @@
import sys
|
||||
import pathlib
|
||||
import importlib.util
|
||||
import types
|
||||
|
||||
|
||||
ROOT = pathlib.Path(__file__).resolve().parents[1]
|
||||
SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src"
|
||||
sys.path.insert(0, str(SRC))
|
||||
|
||||
# stub mcp.server.fastmcp
|
||||
mcp_pkg = types.ModuleType("mcp")
|
||||
server_pkg = types.ModuleType("mcp.server")
|
||||
fastmcp_pkg = types.ModuleType("mcp.server.fastmcp")
|
||||
class _Dummy: pass
|
||||
fastmcp_pkg.FastMCP = _Dummy
|
||||
fastmcp_pkg.Context = _Dummy
|
||||
server_pkg.fastmcp = fastmcp_pkg
|
||||
mcp_pkg.server = server_pkg
sys.modules.setdefault("mcp", mcp_pkg)
sys.modules.setdefault("mcp.server", server_pkg)
sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg)


def _load(path: pathlib.Path, name: str):
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod


manage_script = _load(SRC / "tools" / "manage_script.py", "manage_script_mod3")


class DummyMCP:
    def __init__(self):
        self.tools = {}

    def tool(self, *args, **kwargs):
        def deco(fn):
            self.tools[fn.__name__] = fn
            return fn
        return deco


def setup_tools():
    mcp = DummyMCP()
    manage_script.register_manage_script_tools(mcp)
    return mcp.tools


def test_explicit_zero_based_normalized_warning(monkeypatch):
    tools = setup_tools()
    apply_edits = tools["apply_text_edits"]

    def fake_send(cmd, params):
        # Simulate Unity path returning minimal success
        return {"success": True}

    monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send)

    # Explicit fields given as 0-based (invalid); SDK should normalize and warn
    edits = [{"startLine": 0, "startCol": 0, "endLine": 0, "endCol": 0, "newText": "//x"}]
    resp = apply_edits(None, uri="unity://path/Assets/Scripts/F.cs", edits=edits, precondition_sha256="sha")

    assert resp["success"] is True
    data = resp.get("data", {})
    assert "normalizedEdits" in data
    assert any(w == "zero_based_explicit_fields_normalized" for w in data.get("warnings", []))
    ne = data["normalizedEdits"][0]
    assert ne["startLine"] == 1 and ne["startCol"] == 1 and ne["endLine"] == 1 and ne["endCol"] == 1


def test_strict_zero_based_error(monkeypatch):
    tools = setup_tools()
    apply_edits = tools["apply_text_edits"]

    def fake_send(cmd, params):
        return {"success": True}

    monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send)

    edits = [{"startLine": 0, "startCol": 0, "endLine": 0, "endCol": 0, "newText": "//x"}]
    resp = apply_edits(None, uri="unity://path/Assets/Scripts/F.cs", edits=edits, precondition_sha256="sha", strict=True)
    assert resp["success"] is False
    assert resp.get("code") == "zero_based_explicit_fields"
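The normalization behavior the two tests above pin down can be sketched as follows. `normalize_edits` is a hypothetical stand-in for the SDK's actual routine (which may shift partially-zero coordinates differently); it lifts explicit 0-based fields to 1-based and records the warning string the tests assert on.

```python
# Hypothetical sketch of the 0-based -> 1-based normalization exercised above.
# Field names mirror the test payloads; the real SDK logic may differ.
def normalize_edits(edits):
    warnings = []
    normalized = []
    for edit in edits:
        edit = dict(edit)  # avoid mutating the caller's dicts
        coords = ("startLine", "startCol", "endLine", "endCol")
        if any(edit.get(k) == 0 for k in coords):
            # Explicit 0-based coordinates are invalid; shift to 1-based and warn once.
            for k in coords:
                edit[k] = edit[k] + 1
            if "zero_based_explicit_fields_normalized" not in warnings:
                warnings.append("zero_based_explicit_fields_normalized")
        normalized.append(edit)
    return normalized, warnings
```

Under `strict=True` the same detection would instead return a `zero_based_explicit_fields` error rather than normalizing.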
@@ -0,0 +1,74 @@
import sys
import pathlib
import importlib.util
import types


ROOT = pathlib.Path(__file__).resolve().parents[1]
SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src"
sys.path.insert(0, str(SRC))

# stub mcp.server.fastmcp to satisfy imports without full dependency
mcp_pkg = types.ModuleType("mcp")
server_pkg = types.ModuleType("mcp.server")
fastmcp_pkg = types.ModuleType("mcp.server.fastmcp")


class _Dummy:
    pass


fastmcp_pkg.FastMCP = _Dummy
fastmcp_pkg.Context = _Dummy
server_pkg.fastmcp = fastmcp_pkg
mcp_pkg.server = server_pkg
sys.modules.setdefault("mcp", mcp_pkg)
sys.modules.setdefault("mcp.server", server_pkg)
sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg)


def _load_module(path: pathlib.Path, name: str):
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod


manage_script = _load_module(SRC / "tools" / "manage_script.py", "manage_script_mod")


class DummyMCP:
    def __init__(self):
        self.tools = {}

    def tool(self, *args, **kwargs):
        def deco(fn):
            self.tools[fn.__name__] = fn
            return fn
        return deco


def setup_tools():
    mcp = DummyMCP()
    manage_script.register_manage_script_tools(mcp)
    return mcp.tools


def test_get_sha_param_shape_and_routing(monkeypatch):
    tools = setup_tools()
    get_sha = tools["get_sha"]

    captured = {}

    def fake_send(cmd, params):
        captured["cmd"] = cmd
        captured["params"] = params
        return {"success": True, "data": {"sha256": "abc", "lengthBytes": 1, "lastModifiedUtc": "2020-01-01T00:00:00Z", "uri": "unity://path/Assets/Scripts/A.cs", "path": "Assets/Scripts/A.cs"}}

    monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send)

    resp = get_sha(None, uri="unity://path/Assets/Scripts/A.cs")
    assert captured["cmd"] == "manage_script"
    assert captured["params"]["action"] == "get_sha"
    assert captured["params"]["name"] == "A"
    assert captured["params"]["path"].endswith("Assets/Scripts")
    assert resp["success"] is True
@@ -0,0 +1,68 @@
import ast
from pathlib import Path

import pytest


# locate server src dynamically to avoid hardcoded layout assumptions
ROOT = Path(__file__).resolve().parents[1]
candidates = [
    ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src",
    ROOT / "UnityMcpServer~" / "src",
]
SRC = next((p for p in candidates if p.exists()), None)
if SRC is None:
    searched = "\n".join(str(p) for p in candidates)
    pytest.skip(
        "Unity MCP server source not found. Tried:\n" + searched,
        allow_module_level=True,
    )


@pytest.mark.skip(reason="TODO: ensure server logs only to stderr and rotating file")
def test_no_stdout_output_from_tools():
    pass


def test_no_print_statements_in_codebase():
    """Ensure no stray print/sys.stdout writes remain in server source."""
    offenders = []
    syntax_errors = []
    for py_file in SRC.rglob("*.py"):
        # Skip virtual envs and third-party packages if they exist under SRC
        parts = set(py_file.parts)
        if ".venv" in parts or "site-packages" in parts:
            continue
        try:
            text = py_file.read_text(encoding="utf-8", errors="strict")
        except UnicodeDecodeError:
            # Be tolerant of encoding edge cases in source tree without silently dropping bytes
            text = py_file.read_text(encoding="utf-8", errors="replace")
        try:
            tree = ast.parse(text, filename=str(py_file))
        except SyntaxError:
            syntax_errors.append(py_file.relative_to(SRC))
            continue

        class StdoutVisitor(ast.NodeVisitor):
            def __init__(self):
                self.hit = False

            def visit_Call(self, node: ast.Call):
                # print(...)
                if isinstance(node.func, ast.Name) and node.func.id == "print":
                    self.hit = True
                # sys.stdout.write(...)
                if isinstance(node.func, ast.Attribute) and node.func.attr == "write":
                    val = node.func.value
                    if isinstance(val, ast.Attribute) and val.attr == "stdout":
                        if isinstance(val.value, ast.Name) and val.value.id == "sys":
                            self.hit = True
                self.generic_visit(node)

        v = StdoutVisitor()
        v.visit(tree)
        if v.hit:
            offenders.append(py_file.relative_to(SRC))
    assert not syntax_errors, "syntax errors in: " + ", ".join(str(e) for e in syntax_errors)
    assert not offenders, "stdout writes found in: " + ", ".join(str(o) for o in offenders)
@@ -0,0 +1,126 @@
import sys
import types
from pathlib import Path

import pytest


# Locate server src dynamically to avoid hardcoded layout assumptions (same as other tests)
ROOT = Path(__file__).resolve().parents[1]
candidates = [
    ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src",
    ROOT / "UnityMcpServer~" / "src",
]
SRC = next((p for p in candidates if p.exists()), None)
if SRC is None:
    searched = "\n".join(str(p) for p in candidates)
    pytest.skip(
        "Unity MCP server source not found. Tried:\n" + searched,
        allow_module_level=True,
    )
sys.path.insert(0, str(SRC))

# Stub mcp.server.fastmcp to satisfy imports without full package
mcp_pkg = types.ModuleType("mcp")
server_pkg = types.ModuleType("mcp.server")
fastmcp_pkg = types.ModuleType("mcp.server.fastmcp")


class _Dummy:
    pass


fastmcp_pkg.FastMCP = _Dummy
fastmcp_pkg.Context = _Dummy
server_pkg.fastmcp = fastmcp_pkg
mcp_pkg.server = server_pkg
sys.modules.setdefault("mcp", mcp_pkg)
sys.modules.setdefault("mcp.server", server_pkg)
sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg)


# Import target module after path injection
import tools.manage_script as manage_script  # type: ignore


class DummyMCP:
    def __init__(self):
        self.tools = {}

    def tool(self, *args, **kwargs):  # ignore decorator kwargs like description
        def _decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return _decorator


class DummyCtx:  # FastMCP Context placeholder
    pass


def _register_tools():
    mcp = DummyMCP()
    manage_script.register_manage_script_tools(mcp)  # populates mcp.tools
    return mcp.tools


def test_split_uri_unity_path(monkeypatch):
    tools = _register_tools()
    captured = {}

    def fake_send(cmd, params):  # capture params and return success
        captured['cmd'] = cmd
        captured['params'] = params
        return {"success": True, "message": "ok"}

    monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send)

    fn = tools['apply_text_edits']
    uri = "unity://path/Assets/Scripts/MyScript.cs"
    fn(DummyCtx(), uri=uri, edits=[], precondition_sha256=None)

    assert captured['cmd'] == 'manage_script'
    assert captured['params']['name'] == 'MyScript'
    assert captured['params']['path'] == 'Assets/Scripts'


@pytest.mark.parametrize(
    "uri, expected_name, expected_path",
    [
        ("file:///Users/alex/Project/Assets/Scripts/Foo%20Bar.cs", "Foo Bar", "Assets/Scripts"),
        ("file://localhost/Users/alex/Project/Assets/Hello.cs", "Hello", "Assets"),
        ("file:///C:/Users/Alex/Proj/Assets/Scripts/Hello.cs", "Hello", "Assets/Scripts"),
        ("file:///tmp/Other.cs", "Other", "tmp"),  # outside Assets → fall back to normalized dir
    ],
)
def test_split_uri_file_urls(monkeypatch, uri, expected_name, expected_path):
    tools = _register_tools()
    captured = {}

    def fake_send(cmd, params):
        captured['cmd'] = cmd
        captured['params'] = params
        return {"success": True, "message": "ok"}

    monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send)

    fn = tools['apply_text_edits']
    fn(DummyCtx(), uri=uri, edits=[], precondition_sha256=None)

    assert captured['params']['name'] == expected_name
    assert captured['params']['path'] == expected_path


def test_split_uri_plain_path(monkeypatch):
    tools = _register_tools()
    captured = {}

    def fake_send(cmd, params):
        captured['params'] = params
        return {"success": True, "message": "ok"}

    monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send)

    fn = tools['apply_text_edits']
    fn(DummyCtx(), uri="Assets/Scripts/Thing.cs", edits=[], precondition_sha256=None)

    assert captured['params']['name'] == 'Thing'
    assert captured['params']['path'] == 'Assets/Scripts'
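The name/path splitting these tests assert on can be sketched as a small helper. `split_uri` below is a hypothetical reconstruction from the test expectations, not the tool's actual code: it accepts `unity://path/...`, `file:` URLs (with percent-decoding and Windows drive paths), and plain relative paths, preferring the project-relative segment starting at `Assets` when one exists.

```python
from urllib.parse import unquote, urlparse

# Hypothetical sketch of the URI -> (name, path) split the tests above check.
def split_uri(uri: str):
    if uri.startswith("unity://path/"):
        rel = uri[len("unity://path/"):]
    elif uri.startswith("file:"):
        # Percent-decode and drop the leading slash (handles /C:/... drive paths too)
        rel = unquote(urlparse(uri).path).lstrip("/")
    else:
        rel = uri
    parts = rel.replace("\\", "/").split("/")
    name = parts[-1].rsplit(".", 1)[0]  # file name without extension
    if "Assets" in parts[:-1]:
        # Prefer the project-relative directory starting at "Assets"
        idx = parts.index("Assets")
        path = "/".join(parts[idx:-1])
    else:
        # Outside Assets: fall back to the normalized directory
        path = "/".join(parts[:-1])
    return name, path
```

This reproduces the parameterized cases above, e.g. `split_uri("file:///tmp/Other.cs")` yields `("Other", "tmp")`.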
@@ -0,0 +1,151 @@
import sys
import pytest
import pathlib
import importlib.util
import types


ROOT = pathlib.Path(__file__).resolve().parents[1]
SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src"
sys.path.insert(0, str(SRC))

# stub mcp.server.fastmcp
mcp_pkg = types.ModuleType("mcp")
server_pkg = types.ModuleType("mcp.server")
fastmcp_pkg = types.ModuleType("mcp.server.fastmcp")


class _D:
    pass


fastmcp_pkg.FastMCP = _D
fastmcp_pkg.Context = _D
server_pkg.fastmcp = fastmcp_pkg
mcp_pkg.server = server_pkg
sys.modules.setdefault("mcp", mcp_pkg)
sys.modules.setdefault("mcp.server", server_pkg)
sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg)


def _load(path: pathlib.Path, name: str):
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod


manage_script_edits = _load(SRC / "tools" / "manage_script_edits.py", "manage_script_edits_mod_guard")


class DummyMCP:
    def __init__(self):
        self.tools = {}

    def tool(self, *args, **kwargs):
        def deco(fn):
            self.tools[fn.__name__] = fn
            return fn
        return deco


def setup_tools():
    mcp = DummyMCP()
    manage_script_edits.register_manage_script_edits_tools(mcp)
    return mcp.tools


def test_regex_delete_structural_guard(monkeypatch):
    tools = setup_tools()
    apply = tools["script_apply_edits"]

    # Craft a minimal C# snippet with a method; a bad regex that deletes only the header and '{'
    # will unbalance braces and should be rejected by preflight.
    bad_pattern = r"(?m)^\s*private\s+void\s+PrintSeries\s*\(\s*\)\s*\{"
    contents = (
        "using UnityEngine;\n\n"
        "public class LongUnityScriptClaudeTest : MonoBehaviour\n{\n"
        "private void PrintSeries()\n{\n    Debug.Log(\"1,2,3\");\n}\n"
        "}\n"
    )

    def fake_send(cmd, params):
        # Only the initial read should be invoked; provide contents
        if cmd == "manage_script" and params.get("action") == "read":
            return {"success": True, "data": {"contents": contents}}
        # If preflight failed as intended, no write should be attempted; return a marker if called
        return {"success": True, "message": "SHOULD_NOT_WRITE"}

    monkeypatch.setattr(manage_script_edits, "send_command_with_retry", fake_send)

    resp = apply(
        ctx=None,
        name="LongUnityScriptClaudeTest",
        path="Assets/Scripts",
        edits=[{"op": "regex_replace", "pattern": bad_pattern, "replacement": ""}],
        options={"validate": "standard"},
    )

    assert isinstance(resp, dict)
    assert resp.get("success") is False
    assert resp.get("code") == "validation_failed"
    data = resp.get("data", {})
    assert data.get("status") == "validation_failed"
    # Helpful hint to prefer structured delete
    assert "delete_method" in (data.get("hint") or "")


# Parameterized robustness cases
BRACE_CONTENT = (
    "using UnityEngine;\n\n"
    "public class LongUnityScriptClaudeTest : MonoBehaviour\n{\n"
    "private void PrintSeries()\n{\n    Debug.Log(\"1,2,3\");\n}\n"
    "}\n"
)

ATTR_CONTENT = (
    "using UnityEngine;\n\n"
    "public class LongUnityScriptClaudeTest : MonoBehaviour\n{\n"
    "[ContextMenu(\"PS\")]\nprivate void PrintSeries()\n{\n    Debug.Log(\"1,2,3\");\n}\n"
    "}\n"
)

EXPR_CONTENT = (
    "using UnityEngine;\n\n"
    "public class LongUnityScriptClaudeTest : MonoBehaviour\n{\n"
    "private void PrintSeries() => Debug.Log(\"1\");\n"
    "}\n"
)


@pytest.mark.parametrize(
    "contents,pattern,repl,expect_success",
    [
        # Unbalanced deletes (should fail with validation_failed)
        (BRACE_CONTENT, r"(?m)^\s*private\s+void\s+PrintSeries\s*\(\s*\)\s*\{", "", False),
        # Remove method closing brace only (leaves class closing brace) -> unbalanced
        (BRACE_CONTENT, r"\n\}\n(?=\s*\})", "\n", False),
        (ATTR_CONTENT, r"(?m)^\s*private\s+void\s+PrintSeries\s*\(\s*\)\s*\{", "", False),
        # Expression-bodied: remove only '(' in header -> paren mismatch
        (EXPR_CONTENT, r"(?m)private\s+void\s+PrintSeries\s*\(", "", False),
        # Safe changes (should succeed)
        (BRACE_CONTENT, r"(?m)^\s*Debug\.Log\(.*?\);\s*$", "", True),
        (EXPR_CONTENT, r"Debug\.Log\(\"1\"\)", "Debug.Log(\"2\")", True),
    ],
)
def test_regex_delete_variants(monkeypatch, contents, pattern, repl, expect_success):
    tools = setup_tools()
    apply = tools["script_apply_edits"]

    def fake_send(cmd, params):
        if cmd == "manage_script" and params.get("action") == "read":
            return {"success": True, "data": {"contents": contents}}
        return {"success": True, "message": "WRITE"}

    monkeypatch.setattr(manage_script_edits, "send_command_with_retry", fake_send)

    resp = apply(
        ctx=None,
        name="LongUnityScriptClaudeTest",
        path="Assets/Scripts",
        edits=[{"op": "regex_replace", "pattern": pattern, "replacement": repl}],
        options={"validate": "standard"},
    )

    if expect_success:
        assert isinstance(resp, dict) and resp.get("success") is True
    else:
        assert isinstance(resp, dict) and resp.get("success") is False and resp.get("code") == "validation_failed"
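The structural preflight idea behind these guard tests can be sketched in a few lines: apply the regex to a scratch copy, then reject the edit if brace or parenthesis counts no longer balance. This is a minimal illustrative check, not the tool's real validator, and a naive count like this ignores braces inside string literals and comments.

```python
import re

# Minimal sketch of a balance preflight: run the regex on a copy and
# refuse the edit if it would unbalance {} or () in the result.
def preflight_balanced(contents: str, pattern: str, replacement: str) -> bool:
    edited = re.sub(pattern, replacement, contents)
    for open_ch, close_ch in (("{", "}"), ("(", ")")):
        if edited.count(open_ch) != edited.count(close_ch):
            return False  # edit would leave the file structurally broken
    return True
```

A regex that strips a method header together with its opening `{` fails this check, which is why the tests above expect `validation_failed` and a hint to use a structured `delete_method` edit instead.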
@@ -0,0 +1,81 @@
import asyncio
import sys
import types
from pathlib import Path

import pytest

# locate server src dynamically to avoid hardcoded layout assumptions
ROOT = Path(__file__).resolve().parents[1]
candidates = [
    ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src",
    ROOT / "UnityMcpServer~" / "src",
]
SRC = next((p for p in candidates if p.exists()), None)
if SRC is None:
    searched = "\n".join(str(p) for p in candidates)
    pytest.skip(
        "Unity MCP server source not found. Tried:\n" + searched,
        allow_module_level=True,
    )
sys.path.insert(0, str(SRC))

from tools.resource_tools import register_resource_tools  # type: ignore


class DummyMCP:
    def __init__(self):
        self._tools = {}

    def tool(self, *args, **kwargs):  # accept kwargs like description
        def deco(fn):
            self._tools[fn.__name__] = fn
            return fn
        return deco


@pytest.fixture()
def resource_tools():
    mcp = DummyMCP()
    register_resource_tools(mcp)
    return mcp._tools


def test_resource_list_filters_and_rejects_traversal(resource_tools, tmp_path, monkeypatch):
    # Create fake project structure
    proj = tmp_path
    assets = proj / "Assets" / "Scripts"
    assets.mkdir(parents=True)
    (assets / "A.cs").write_text("// a", encoding="utf-8")
    (assets / "B.txt").write_text("b", encoding="utf-8")
    outside = tmp_path / "Outside.cs"
    outside.write_text("// outside", encoding="utf-8")
    # Symlink attempting to escape
    sneaky_link = assets / "link_out"
    try:
        sneaky_link.symlink_to(outside)
    except Exception:
        # Some platforms may not allow symlinks in tests; ignore
        pass

    list_resources = resource_tools["list_resources"]
    # Only .cs under Assets should be listed
    resp = asyncio.run(
        list_resources(ctx=None, pattern="*.cs", under="Assets", limit=50, project_root=str(proj))
    )
    assert resp["success"] is True
    uris = resp["data"]["uris"]
    assert any(u.endswith("Assets/Scripts/A.cs") for u in uris)
    assert not any(u.endswith("B.txt") for u in uris)
    assert not any(u.endswith("Outside.cs") for u in uris)


def test_resource_list_rejects_outside_paths(resource_tools, tmp_path):
    proj = tmp_path
    # under points outside Assets
    list_resources = resource_tools["list_resources"]
    resp = asyncio.run(
        list_resources(ctx=None, pattern="*.cs", under="..", limit=10, project_root=str(proj))
    )
    assert resp["success"] is False
    assert "Assets" in resp.get("error", "") or "under project root" in resp.get("error", "")
@@ -0,0 +1,36 @@
import pytest


@pytest.mark.xfail(strict=False, reason="pending: create new script, validate, apply edits, build and compile scene")
def test_script_edit_happy_path():
    pass


@pytest.mark.xfail(strict=False, reason="pending: multiple micro-edits debounce to single compilation")
def test_micro_edits_debounce():
    pass


@pytest.mark.xfail(strict=False, reason="pending: line ending variations handled correctly")
def test_line_endings_and_columns():
    pass


@pytest.mark.xfail(strict=False, reason="pending: regex_replace no-op with allow_noop honored")
def test_regex_replace_noop_allowed():
    pass


@pytest.mark.xfail(strict=False, reason="pending: large edit size boundaries and overflow protection")
def test_large_edit_size_and_overflow():
    pass


@pytest.mark.xfail(strict=False, reason="pending: symlink and junction protections on edits")
def test_symlink_and_junction_protection():
    pass


@pytest.mark.xfail(strict=False, reason="pending: atomic write guarantees")
def test_atomic_write_guarantees():
    pass
@@ -0,0 +1,159 @@
import sys
import pathlib
import importlib.util
import types
import pytest
import asyncio

# add server src to path and load modules without triggering package imports
ROOT = pathlib.Path(__file__).resolve().parents[1]
SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src"
sys.path.insert(0, str(SRC))

# stub mcp.server.fastmcp to satisfy imports without full dependency
mcp_pkg = types.ModuleType("mcp")
server_pkg = types.ModuleType("mcp.server")
fastmcp_pkg = types.ModuleType("mcp.server.fastmcp")


class _Dummy:
    pass


fastmcp_pkg.FastMCP = _Dummy
fastmcp_pkg.Context = _Dummy
server_pkg.fastmcp = fastmcp_pkg
mcp_pkg.server = server_pkg
sys.modules.setdefault("mcp", mcp_pkg)
sys.modules.setdefault("mcp.server", server_pkg)
sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg)


def load_module(path, name):
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


manage_script_module = load_module(SRC / "tools" / "manage_script.py", "manage_script_module")
manage_asset_module = load_module(SRC / "tools" / "manage_asset.py", "manage_asset_module")


class DummyMCP:
    def __init__(self):
        self.tools = {}

    def tool(self, *args, **kwargs):  # accept decorator kwargs like description
        def decorator(func):
            self.tools[func.__name__] = func
            return func
        return decorator


def setup_manage_script():
    mcp = DummyMCP()
    manage_script_module.register_manage_script_tools(mcp)
    return mcp.tools


def setup_manage_asset():
    mcp = DummyMCP()
    manage_asset_module.register_manage_asset_tools(mcp)
    return mcp.tools


def test_apply_text_edits_long_file(monkeypatch):
    tools = setup_manage_script()
    apply_edits = tools["apply_text_edits"]
    captured = {}

    def fake_send(cmd, params):
        captured["cmd"] = cmd
        captured["params"] = params
        return {"success": True}

    monkeypatch.setattr(manage_script_module, "send_command_with_retry", fake_send)

    edit = {"startLine": 1005, "startCol": 0, "endLine": 1005, "endCol": 5, "newText": "Hello"}
    resp = apply_edits(None, "unity://path/Assets/Scripts/LongFile.cs", [edit])
    assert captured["cmd"] == "manage_script"
    assert captured["params"]["action"] == "apply_text_edits"
    assert captured["params"]["edits"][0]["startLine"] == 1005
    assert resp["success"] is True


def test_sequential_edits_use_precondition(monkeypatch):
    tools = setup_manage_script()
    apply_edits = tools["apply_text_edits"]
    calls = []

    def fake_send(cmd, params):
        calls.append(params)
        return {"success": True, "sha256": f"hash{len(calls)}"}

    monkeypatch.setattr(manage_script_module, "send_command_with_retry", fake_send)

    edit1 = {"startLine": 1, "startCol": 0, "endLine": 1, "endCol": 0, "newText": "//header\n"}
    resp1 = apply_edits(None, "unity://path/Assets/Scripts/File.cs", [edit1])
    edit2 = {"startLine": 2, "startCol": 0, "endLine": 2, "endCol": 0, "newText": "//second\n"}
    resp2 = apply_edits(None, "unity://path/Assets/Scripts/File.cs", [edit2], precondition_sha256=resp1["sha256"])

    assert calls[1]["precondition_sha256"] == resp1["sha256"]
    assert resp2["sha256"] == "hash2"


def test_apply_text_edits_forwards_options(monkeypatch):
    tools = setup_manage_script()
    apply_edits = tools["apply_text_edits"]
    captured = {}

    def fake_send(cmd, params):
        captured["params"] = params
        return {"success": True}

    monkeypatch.setattr(manage_script_module, "send_command_with_retry", fake_send)

    opts = {"validate": "relaxed", "applyMode": "atomic", "refresh": "immediate"}
    apply_edits(None, "unity://path/Assets/Scripts/File.cs", [{"startLine": 1, "startCol": 1, "endLine": 1, "endCol": 1, "newText": "x"}], options=opts)
    assert captured["params"].get("options") == opts


def test_apply_text_edits_defaults_atomic_for_multi_span(monkeypatch):
    tools = setup_manage_script()
    apply_edits = tools["apply_text_edits"]
    captured = {}

    def fake_send(cmd, params):
        captured["params"] = params
        return {"success": True}

    monkeypatch.setattr(manage_script_module, "send_command_with_retry", fake_send)

    edits = [
        {"startLine": 2, "startCol": 2, "endLine": 2, "endCol": 3, "newText": "A"},
        {"startLine": 3, "startCol": 2, "endLine": 3, "endCol": 2, "newText": "// tail\n"},
    ]
    apply_edits(None, "unity://path/Assets/Scripts/File.cs", edits, precondition_sha256="x")
    opts = captured["params"].get("options", {})
    assert opts.get("applyMode") == "atomic"


def test_manage_asset_prefab_modify_request(monkeypatch):
    tools = setup_manage_asset()
    manage_asset = tools["manage_asset"]
    captured = {}

    async def fake_async(cmd, params, loop=None):
        captured["cmd"] = cmd
        captured["params"] = params
        return {"success": True}

    monkeypatch.setattr(manage_asset_module, "async_send_command_with_retry", fake_async)
    monkeypatch.setattr(manage_asset_module, "get_unity_connection", lambda: object())

    async def run():
        resp = await manage_asset(
            None,
            action="modify",
            path="Assets/Prefabs/Player.prefab",
            properties={"hp": 100},
        )
        assert captured["cmd"] == "manage_asset"
        assert captured["params"]["action"] == "modify"
        assert captured["params"]["path"] == "Assets/Prefabs/Player.prefab"
        assert captured["params"]["properties"] == {"hp": 100}
        assert resp["success"] is True

    asyncio.run(run())
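The `precondition_sha256` flow exercised above is classic optimistic concurrency: hash the contents you last saw, send that hash with the edit, and let the server refuse the write if the file changed underneath you. A minimal sketch, with illustrative names (`guarded_write`, the `stale_file` code, and the `stored` dict are all hypothetical, not the bridge's actual API):

```python
import hashlib

def compute_precondition(contents: str) -> str:
    # SHA-256 of the exact bytes the client last read
    return hashlib.sha256(contents.encode("utf-8")).hexdigest()

def guarded_write(stored, new_text, precondition):
    # Reject the write if the caller's hash no longer matches the stored file
    if precondition is not None and precondition != stored["sha256"]:
        return {"success": False, "code": "stale_file"}
    stored["text"] = new_text
    stored["sha256"] = compute_precondition(new_text)
    # Return the new hash so sequential edits can chain preconditions
    return {"success": True, "sha256": stored["sha256"]}
```

Chaining the returned `sha256` into the next call is exactly what `test_sequential_edits_use_precondition` checks on the SDK side.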
@@ -0,0 +1,218 @@
import sys
import json
import struct
import socket
import threading
import time
import select
from pathlib import Path

import pytest

# locate server src dynamically to avoid hardcoded layout assumptions
ROOT = Path(__file__).resolve().parents[1]
candidates = [
    ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src",
    ROOT / "UnityMcpServer~" / "src",
]
SRC = next((p for p in candidates if p.exists()), None)
if SRC is None:
    searched = "\n".join(str(p) for p in candidates)
    pytest.skip(
        "Unity MCP server source not found. Tried:\n" + searched,
        allow_module_level=True,
    )
sys.path.insert(0, str(SRC))

from unity_connection import UnityConnection


def start_dummy_server(greeting: bytes, respond_ping: bool = False):
    """Start a minimal TCP server for handshake tests."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    sock.listen(1)
    port = sock.getsockname()[1]
    ready = threading.Event()

    def _run():
        ready.set()
        conn, _ = sock.accept()
        conn.settimeout(1.0)
        if greeting:
            conn.sendall(greeting)
        if respond_ping:
            try:
                # Read exactly n bytes helper
                def _read_exact(n: int) -> bytes:
                    buf = b""
                    while len(buf) < n:
                        chunk = conn.recv(n - len(buf))
                        if not chunk:
                            break
                        buf += chunk
                    return buf

                header = _read_exact(8)
                if len(header) == 8:
                    length = struct.unpack(">Q", header)[0]
                    payload = _read_exact(length)
                    if payload == b'{"type":"ping"}':
                        resp = b'{"type":"pong"}'
                        conn.sendall(struct.pack(">Q", len(resp)) + resp)
            except Exception:
                pass
        time.sleep(0.1)
        try:
            conn.close()
        except Exception:
            pass
        finally:
            sock.close()

    threading.Thread(target=_run, daemon=True).start()
    ready.wait()
    return port


def start_handshake_enforcing_server():
    """Server that drops connection if client sends data before handshake."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    sock.listen(1)
    port = sock.getsockname()[1]
    ready = threading.Event()

    def _run():
        ready.set()
        conn, _ = sock.accept()
        # If client sends any data before greeting, disconnect (poll briefly)
        try:
            conn.setblocking(False)
            deadline = time.time() + 0.15  # short, reduces race with legitimate clients
            while time.time() < deadline:
                r, _, _ = select.select([conn], [], [], 0.01)
                if r:
                    try:
                        peek = conn.recv(1, socket.MSG_PEEK)
                    except BlockingIOError:
                        peek = b""
                    except Exception:
                        peek = b"\x00"
                    if peek:
                        conn.close()
                        sock.close()
                        return
            # No pre-handshake data observed; send greeting
            conn.setblocking(True)
            conn.sendall(b"MCP/0.1 FRAMING=1\n")
            time.sleep(0.1)
        finally:
            try:
                conn.close()
            finally:
                sock.close()

    threading.Thread(target=_run, daemon=True).start()
    ready.wait()
    return port


def test_handshake_requires_framing():
    port = start_dummy_server(b"MCP/0.1\n")
    conn = UnityConnection(host="127.0.0.1", port=port)
    assert conn.connect() is False
    assert conn.sock is None


def test_small_frame_ping_pong():
    port = start_dummy_server(b"MCP/0.1 FRAMING=1\n", respond_ping=True)
    conn = UnityConnection(host="127.0.0.1", port=port)
    try:
        assert conn.connect() is True
        assert conn.use_framing is True
        payload = b'{"type":"ping"}'
        conn.sock.sendall(struct.pack(">Q", len(payload)) + payload)
        resp = conn.receive_full_response(conn.sock)
        assert json.loads(resp.decode("utf-8"))["type"] == "pong"
    finally:
        conn.disconnect()


def test_unframed_data_disconnect():
    port = start_handshake_enforcing_server()
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("127.0.0.1", port))
    sock.settimeout(1.0)
    sock.sendall(b"BAD")
    time.sleep(0.4)
    try:
        data = sock.recv(1024)
        assert data == b""
    except (ConnectionResetError, ConnectionAbortedError):
        # Some platforms raise instead of returning empty bytes when the
        # server closes the connection after detecting pre-handshake data.
        pass
    finally:
        sock.close()


def test_zero_length_payload_heartbeat():
    # Server that sends handshake and a zero-length heartbeat frame followed by a pong payload
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    sock.listen(1)
    port = sock.getsockname()[1]
    ready = threading.Event()

    def _run():
        ready.set()
        conn, _ = sock.accept()
        try:
            conn.sendall(b"MCP/0.1 FRAMING=1\n")
            time.sleep(0.02)
            # Heartbeat frame (length=0)
            conn.sendall(struct.pack(">Q", 0))
            time.sleep(0.02)
            # Real payload frame
            payload = b'{"type":"pong"}'
            conn.sendall(struct.pack(">Q", len(payload)) + payload)
            time.sleep(0.02)
        finally:
            try:
                conn.close()
            except Exception:
                pass
            sock.close()

    threading.Thread(target=_run, daemon=True).start()
    ready.wait()

    conn = UnityConnection(host="127.0.0.1", port=port)
    try:
        assert conn.connect() is True
        # Receive should skip heartbeat and return the pong payload (or empty if only heartbeats seen)
        resp = conn.receive_full_response(conn.sock)
        assert resp in (b'{"type":"pong"}', b"")
    finally:
        conn.disconnect()


@pytest.mark.skip(reason="TODO: oversized payload should disconnect")
def test_oversized_payload_rejected():
    pass


@pytest.mark.skip(reason="TODO: partial header/payload triggers timeout and disconnect")
def test_partial_frame_timeout():
    pass


@pytest.mark.skip(reason="TODO: concurrency test with parallel tool invocations")
def test_parallel_invocations_no_interleaving():
    pass


@pytest.mark.skip(reason="TODO: reconnection after drop mid-command")
def test_reconnect_mid_command():
    pass
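The wire format these transport tests exercise is an 8-byte big-endian unsigned length prefix (`struct.pack(">Q", ...)`) followed by the payload, negotiated by the `MCP/0.1 FRAMING=1` greeting, with a zero-length frame serving as a heartbeat. A self-contained encode/decode sketch of that framing (`decode_frames` is an illustrative helper, not part of `unity_connection`):

```python
import struct

def encode_frame(payload: bytes) -> bytes:
    # 8-byte big-endian length prefix, then the payload bytes
    return struct.pack(">Q", len(payload)) + payload

def decode_frames(stream: bytes):
    """Yield payloads from a byte buffer, skipping zero-length heartbeat frames."""
    offset = 0
    while offset + 8 <= len(stream):
        (length,) = struct.unpack(">Q", stream[offset:offset + 8])
        offset += 8
        payload = stream[offset:offset + length]
        offset += length
        if length:  # length == 0 is a heartbeat; carry no payload upward
            yield payload
```

This matches the frames the dummy servers above emit: `encode_frame(b"")` is the heartbeat in `test_zero_length_payload_heartbeat`, and the ping/pong exchange in `test_small_frame_ping_pong` is one frame in each direction.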