AI Integration
The AI assistant in Shinro Studio is not just a chatbot; it is a traversal layer across the five modes, designed for agentic engineering on top of a physical AI system. The mechanism is the Model Context Protocol (MCP): every module descriptor is exposed through MCP, so an AI agent reasoning about your robot is reasoning about the same source of truth the kernel uses.
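What that exposure can look like in practice: below is a minimal sketch using the official TypeScript MCP SDK (`@modelcontextprotocol/sdk`) to serve descriptor lookups as an MCP tool. The store contents, field names, and tool name are hypothetical stand-ins for illustration, not Shinro Studio's actual MCP surface.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical in-memory stand-in for the kernel's descriptor store.
const descriptors: Record<string, unknown> = {
  "perception/obstacle-detector": {
    reads: ["camera/front"],       // sensors this module reads
    provides: ["obstacles"],       // capabilities it offers to others
    composesWith: ["nav/planner"], // modules it composes with
    requires: { ai: "vlm" },       // the kind of AI it expects
  },
};

const server = new McpServer({ name: "shinro-descriptors", version: "0.1.0" });

// One tool: given a module id, return its descriptor verbatim, so the
// agent reads the same source of truth the kernel uses.
server.tool(
  "get_module_descriptor",
  { moduleId: z.string() },
  async ({ moduleId }) => {
    const descriptor = descriptors[moduleId];
    return {
      content: [
        {
          type: "text",
          text: descriptor
            ? JSON.stringify(descriptor, null, 2)
            : `unknown module: ${moduleId}`,
        },
      ],
      isError: !descriptor,
    };
  },
);

await server.connect(new StdioServerTransport());
```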
Assistance in each mode
- At Blueprint — the assistant suggests hardware compositions and reads vendor datasheets to scaffold hardware modules.
- At Flow — the assistant scaffolds behavior trees from natural-language descriptions (see the sketch after this list).
- At Code — the assistant authors and reviews module code.
- At Simulator — the assistant configures simulation scenarios and interprets failures.
- At Deployment — the assistant troubleshoots deployment errors with full visibility into the module graph and the failure point.
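To make the Flow case concrete, here is one plausible shape for the behavior tree such scaffolding could emit, sketched in TypeScript. The node kinds, field names, and module ids are hypothetical illustrations, not Shinro Studio's flow schema.

```ts
// One plausible behavior-tree encoding a natural-language prompt could
// scaffold into. Node kinds and module ids are hypothetical.
type BTNode =
  | { kind: "sequence"; children: BTNode[] }            // run in order, fail fast
  | { kind: "fallback"; children: BTNode[] }            // try until one succeeds
  | { kind: "action"; module: string; action: string }; // delegate to a module

// "Patrol the hallway; if an obstacle appears, stop and report it."
const patrol: BTNode = {
  kind: "fallback",
  children: [
    {
      kind: "sequence",
      children: [
        { kind: "action", module: "perception/obstacle-detector", action: "path_clear" },
        { kind: "action", module: "nav/patrol", action: "advance" },
      ],
    },
    { kind: "action", module: "comms/reporter", action: "report_obstacle" },
  ],
};
```

The fallback node keeps advancing while the path-clear check succeeds and falls through to the reporting action when it fails, which is standard behavior-tree semantics.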
Descriptors as AI context
Each module descriptor is more than a build spec. It is the contract the AI uses to understand what is happening in the system. A perception module’s descriptor tells the agent which sensors it reads, which capabilities it provides, which other modules it composes with, and what kind of AI it expects. The agent doesn’t have to infer the system’s state from logs or screenshots; it queries the descriptor graph through MCP and gets a precise, current answer. This is what differentiates AI-native development from AI-assisted development: the AI is not a passenger guessing at the system; it is a peer reasoning about it.
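On the agent side, that query might look like the following sketch, again using the TypeScript MCP SDK. The server command and tool name match the hypothetical server sketch above; neither is Shinro Studio's actual interface.

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "shinro-agent", version: "0.1.0" });

// Launch and connect to the (hypothetical) descriptor server over stdio.
await client.connect(
  new StdioClientTransport({ command: "shinro-descriptor-server" }),
);

// Ask the descriptor graph directly instead of inferring state from logs.
const result = await client.callTool({
  name: "get_module_descriptor",
  arguments: { moduleId: "perception/obstacle-detector" },
});
console.log(result.content);
```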
Bring your own model
Shinro Studio does not lock you to a specific AI provider. Each module that uses AI declares what kind of AI it needs — VLM, SLM, LLM, reasoning vs. retrieval — and the user routes that requirement to a provider of their choice: self-hosted Ollama, the Anthropic Claude API, OpenAI Codex, or another.
Per-module routing means a fleet can run on-device VLMs for perception while the Deployment Manager calls out to a hosted LLM for troubleshooting. The provider choice is a deployment decision, not a framework constraint.
Concretely, the requirement lives in the module’s descriptor: a perception module declares `requires.ai = "vlm"`, and Shinro Studio routes it to the user’s configured provider. The exact descriptor field name is finalized in the canonical schema.
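A minimal routing sketch under those assumptions: the `requires.ai` value comes from the descriptor and the provider names from the list above, while everything else (type names, table shape, endpoints) is illustrative.

```ts
// Per-module routing of declared AI requirements to providers. The
// routing table is a deployment decision, not a framework constraint.
type AiKind = "vlm" | "slm" | "llm";

interface ProviderRoute {
  provider: "ollama" | "anthropic" | "openai-codex";
  endpoint: string;
}

const routes: Record<string, ProviderRoute> = {
  // Perception stays on-device via self-hosted Ollama.
  "perception/obstacle-detector": {
    provider: "ollama",
    endpoint: "http://localhost:11434",
  },
  // The Deployment Manager calls out to a hosted API for troubleshooting.
  "deployment/manager": {
    provider: "anthropic",
    endpoint: "https://api.anthropic.com",
  },
};

function routeFor(moduleId: string, required: AiKind): ProviderRoute {
  const route = routes[moduleId];
  if (!route) {
    throw new Error(`no provider configured for ${moduleId} (needs ${required})`);
  }
  return route;
}
```

Moving a module from on-device to hosted is then a one-line change in the table, with no module code touched.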
Supported patterns
- VLM on-device — perception modules running a vision-language AI model on the robot itself.
- VLM on-simulator — synthetic data and policy work in the Simulator mode.
- LLM via a self-hosted runtime (Ollama, others) or a hosted API (Claude, Codex, others) for code authorship, flow design, and deployment troubleshooting.
- MCP-mediated agentic engineering for descriptor-aware tasks across the five modes.