AI Agents
AI Agents in Synthesise are autonomous modules trained to perform specific strategic, operational, or creative functions across the product development lifecycle. Where chatbots react to conversation, agents proactively generate, analyze, and optimize.
These agents operate within flows, execute independently or as part of teams, and communicate via internal protocols.
1. Agent Architecture
Each agent is made up of:
Purpose – the specific outcome it's optimized for (e.g., SEO, pricing, UX)
Model – the underlying LLM or ruleset (GPT-4, Claude, or fine-tuned local models)
Memory – local session context plus shared product knowledge
Interface – a unified `Agent` trait implemented across all agents (sketched below)
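A minimal sketch of that interface, assuming Rust (the trait and WASM references on this page suggest it); the associated types, method names, and placeholder types below are illustrative assumptions, not the real definition.

```rust
/// Illustrative sketch: the `Agent` trait name comes from the docs, but the
/// associated types, method names, and helper types below are assumptions.
pub trait Agent {
    /// Structured input the agent accepts (e.g. a module draft).
    type Input;
    /// Structured output it produces (e.g. titles, audit notes).
    type Output;

    /// Purpose: the specific outcome this agent is optimized for (SEO, pricing, UX, ...).
    fn purpose(&self) -> &str;

    /// Model: the underlying LLM or ruleset backing the agent.
    fn model(&self) -> &str;

    /// Execute against local session context plus shared product knowledge.
    fn execute(&self, input: Self::Input, memory: &SessionMemory)
        -> Result<Self::Output, AgentError>;
}

/// Placeholder types so the sketch stands on its own.
pub struct SessionMemory;
#[derive(Debug)]
pub struct AgentError;
```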
2. Available Agents
TutorAgent – Generates educational outlines, assessments, and learning flows
SEOAgent – Produces search-optimized titles, meta tags, alt text
UXAuditAgent – Evaluates module structure and friction points
MonetizationAgent – Recommends pricing tiers, upsell logic, and access paths
Example:
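A minimal sketch of how SEOAgent might implement the `Agent` trait sketched in section 1; the struct fields, output shape, and stubbed model call are assumptions for illustration only.

```rust
// Builds on the `Agent` trait sketch from section 1. Field names, the output
// shape, and the stubbed model call are illustrative, not the real API.
pub struct SEOAgent {
    model: String,
}

#[derive(Debug)]
pub struct SeoOutput {
    pub title: String,
    pub meta_description: String,
    pub alt_text: Vec<String>,
}

impl Agent for SEOAgent {
    type Input = String;      // raw module content
    type Output = SeoOutput;  // search-optimized metadata

    fn purpose(&self) -> &str {
        "SEO"
    }

    fn model(&self) -> &str {
        &self.model
    }

    fn execute(&self, input: String, _memory: &SessionMemory) -> Result<SeoOutput, AgentError> {
        // The real agent would run LLM inference here; this stub keeps the sketch runnable.
        Ok(SeoOutput {
            title: format!("{} | Synthesise", input.lines().next().unwrap_or("Untitled")),
            meta_description: input.chars().take(155).collect(),
            alt_text: Vec::new(),
        })
    }
}
```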
3. Agent Lifecycle
Invocation – Triggered by user flow (e.g. creating a new module) or other agents
Prompt Assembly – Combines charter + user context + input for LLM inference
Execution – Outputs are streamed or completed depending on use case
Logging & Feedback – Outputs are saved to session memory and rated for accuracy
Replay/Mutation – Outputs can be forked or regenerated with new parameters
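A hedged sketch of how these stages could be wired together; the `Charter` and `UserContext` types and the function names are assumptions, and the model call is stubbed.

```rust
// Hedged sketch of the five lifecycle stages wired together. `Charter`,
// `UserContext`, and the function names are assumed for illustration only.
struct Charter { text: String }
struct UserContext { text: String }

struct LifecycleRecord {
    prompt: String,
    output: String,
    rating: Option<u8>,
}

// Prompt assembly + execution: charter + user context + input go to the model.
fn run_agent(charter: &Charter, ctx: &UserContext, input: &str) -> LifecycleRecord {
    let prompt = format!("{}\n---\n{}\n---\n{}", charter.text, ctx.text, input);
    let output = infer(&prompt); // stand-in for streamed or completed LLM output
    LifecycleRecord { prompt, output, rating: None }
}

// Replay/mutation: fork a previous output and regenerate with new parameters.
fn rerun_with(record: &LifecycleRecord, extra_params: &str) -> LifecycleRecord {
    let prompt = format!("{}\n[params: {}]", record.prompt, extra_params);
    let output = infer(&prompt);
    LifecycleRecord { prompt, output, rating: None }
}

fn infer(prompt: &str) -> String {
    format!("<model output for a {}-char prompt>", prompt.len())
}

fn main() {
    // Invocation: a flow (e.g. creating a new module) calls the agent.
    let charter = Charter { text: "SEO agent charter".into() };
    let ctx = UserContext { text: "Course: Intro to Async Rust".into() };
    let mut first = run_agent(&charter, &ctx, "Draft lesson text");

    // Logging & feedback: the output is saved to session memory and rated.
    first.rating = Some(4);

    let forked = rerun_with(&first, "tone=casual");
    println!("{:?} / {}", first.rating, forked.output);
}
```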
4. Multi-Agent Collaboration
Agents can interact in swarm-style flows using a shared product state. This enables compound outputs: for example, SEOAgent might propose the headline, and MonetizationAgent then adjusts pricing based on it.
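A minimal sketch of that hand-off, assuming the shared product state is a simple key-value map; `ProductState` and the two functions standing in for SEOAgent and MonetizationAgent are illustrative, not the internal protocol.

```rust
use std::collections::HashMap;

// Swarm-style flow over shared product state: each agent reads what earlier
// agents wrote and contributes its own piece of the compound output.
type ProductState = HashMap<String, String>;

fn seo_agent(state: &mut ProductState) {
    // Proposes a headline and writes it into the shared state.
    state.insert("headline".into(), "Learn Async Rust in 7 Days".into());
}

fn monetization_agent(state: &mut ProductState) {
    // Reads the headline SEOAgent produced and adjusts pricing based on it.
    let tier = if state.get("headline").map_or(false, |h| h.contains("7 Days")) {
        "premium"
    } else {
        "standard"
    };
    state.insert("pricing_tier".into(), tier.into());
}

fn main() {
    let mut state = ProductState::new();
    seo_agent(&mut state);
    monetization_agent(&mut state);
    println!("{:?}", state); // compound output produced by two agents
}
```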
5. Custom Agent Creation (Pro Feature)
Pro users will be able to define their own agents with:
Input/output schema templates
Prompt libraries
Optional fine-tuned model integration
Early versions are compiled to WASM and run inside secure sandboxes.
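A hedged sketch of what such a definition could look like; the `CustomAgentSpec` fields are assumptions drawn only from the three bullet points above, and nothing here reflects a shipped API.

```rust
// Hedged sketch of a Pro custom-agent definition. Field names and the idea of
// a single spec struct are assumptions; the docs only state that definitions
// include I/O schemas, prompt libraries, and optional fine-tuned models, and
// that early versions compile to WASM and run in secure sandboxes.
pub struct CustomAgentSpec {
    pub name: String,
    pub input_schema: String,          // e.g. a JSON Schema document
    pub output_schema: String,
    pub prompt_library: Vec<String>,   // reusable prompt templates
    pub fine_tuned_model: Option<String>,
}

fn main() {
    let spec = CustomAgentSpec {
        name: "BrandVoiceAgent".into(),
        input_schema: r#"{"type":"object","properties":{"draft":{"type":"string"}}}"#.into(),
        output_schema: r#"{"type":"object","properties":{"rewritten":{"type":"string"}}}"#.into(),
        prompt_library: vec!["Rewrite the draft in our brand voice.".into()],
        fine_tuned_model: None,
    };
    // A spec like this would be compiled to WASM and run inside a secure sandbox.
    println!("{} ({} prompts) targets the WASM sandbox", spec.name, spec.prompt_library.len());
}
```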