# FAQ & Troubleshooting
1. I want to ensure that no agent uses GPT-4 due to cost concerns. Can I enforce this at the platform level?
Yes, as a tenant admin, you can disable GPT-4 for all workspaces, ensuring agents can’t access it during creation or benchmarking.
2. Our compliance team wants to audit how an agent responded to a specific customer query. Is that possible?
Yes. In the Governance module, navigate to the agent in question. On the Agent Monitoring dashboard, open the Messages tab and search for that query. You can then view the exact input/output payload, including the agent's traces, providing full transparency for audit purposes.
3. We changed the prompt and model of an agent and want to verify that it didn’t degrade performance. How can governance help?
Use Benchmarking + Governance Transaction Logs. Compare pre-change and post-change agent transactions using version-level comparisons and benchmark metrics to verify accuracy, latency, and cost impact.
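As a rough illustration of the comparison described above, the sketch below aggregates hypothetical transaction-log records by agent version and computes the metric deltas. The record schema (`version`, `accuracy`, `latency_ms`, `cost_usd`) is an assumption for illustration, not the platform's actual export format.

```python
# Hypothetical sketch: comparing benchmark metrics across two agent versions.
# The log schema below is assumed, not the platform's real export format.
from statistics import mean

transactions = [
    {"version": "v1", "accuracy": 0.91, "latency_ms": 820, "cost_usd": 0.012},
    {"version": "v1", "accuracy": 0.88, "latency_ms": 790, "cost_usd": 0.011},
    {"version": "v2", "accuracy": 0.93, "latency_ms": 910, "cost_usd": 0.015},
    {"version": "v2", "accuracy": 0.95, "latency_ms": 940, "cost_usd": 0.016},
]

def summarize(version):
    """Average each metric over all transactions for one agent version."""
    rows = [t for t in transactions if t["version"] == version]
    return {k: round(mean(r[k] for r in rows), 4)
            for k in ("accuracy", "latency_ms", "cost_usd")}

before, after = summarize("v1"), summarize("v2")
delta = {k: round(after[k] - before[k], 4) for k in before}
print(delta)  # positive accuracy delta, but higher latency and cost
```

A positive accuracy delta alongside higher latency/cost deltas is exactly the trade-off the version-level comparison is meant to surface.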
4. I need to monitor how many tokens are being consumed per agent per week. Where can I find that?
Under Agent Monitoring, you can view token usage trends, cost-per-agent, and even drill down to per-transaction token counts. Use these views to track consumption and optimize model configurations.
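The per-agent weekly roll-up described above can be sketched as follows. The field names (`agent_id`, `timestamp`, `tokens`) are assumptions; the platform's monitoring export may use a different schema.

```python
# Hypothetical sketch of a per-agent, per-ISO-week token roll-up.
# Field names are assumed; the platform's export schema may differ.
from collections import defaultdict
from datetime import datetime

log = [
    {"agent_id": "support-bot", "timestamp": "2024-05-06T10:00:00", "tokens": 1200},
    {"agent_id": "support-bot", "timestamp": "2024-05-08T14:30:00", "tokens": 800},
    {"agent_id": "report-agent", "timestamp": "2024-05-07T09:15:00", "tokens": 2500},
    {"agent_id": "support-bot", "timestamp": "2024-05-14T11:00:00", "tokens": 600},
]

weekly = defaultdict(int)
for entry in log:
    ts = datetime.fromisoformat(entry["timestamp"])
    iso_year, iso_week, _ = ts.isocalendar()  # group by ISO calendar week
    weekly[(entry["agent_id"], iso_year, iso_week)] += entry["tokens"]

for key, total in sorted(weekly.items()):
    print(key, total)
```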
5. Can we restrict certain agents/Knowledge gardens from being visible or modifiable by specific teams?
Yes. As the creator of an agent, you can set its visibility to Private via the Expert Agent Studio, restricting it to yourself alone; likewise, you can set a Knowledge Garden's visibility to Private via the Knowledge Garden module. This enforces privacy for sensitive agents and documents. Note that visibility is currently binary (Private or Public), so more granular team-level restrictions are not yet available (see Limitations).
6. I saw an unexpected spike in model usage yesterday. How can I find out what happened?
Navigate to the Governance module and sort the transactions to identify agents with unusually high transaction counts. To inspect model usage for a specific agent, click that agent to open its Agent Monitoring page, which shows an overview of model usage along with the transaction-level logs.
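The triage step above, ranking agents by transaction count to localize a spike, can be sketched with a simple counter. The flat list of agent IDs stands in for a real transaction log and is purely illustrative.

```python
# Hypothetical sketch: rank agents by transaction count to localize a spike.
# The flat list of agent IDs stands in for a real transaction log.
from collections import Counter

yesterday_transactions = [
    "invoice-agent", "invoice-agent", "support-bot", "invoice-agent",
    "invoice-agent", "report-agent", "invoice-agent", "support-bot",
]

counts = Counter(yesterday_transactions)
for agent, n in counts.most_common():
    print(agent, n)
# invoice-agent tops the list, so its monitoring page is where to drill down
```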
7. How does the platform handle PII detection and anonymization, and what flexibility does it offer?
PII detection and anonymization are facilitated through Custom Tools that integrate into agentic process flows. Builders can integrate various third-party or proprietary PII services (like Nemo) and configure them for specific use cases, or assign them directly to AI agents, providing flexible, custom control over sensitive data.
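A minimal sketch of what such a custom PII tool might look like, assuming the platform accepts a plain text-in/text-out callable. The regex patterns are illustrative only and are no substitute for a dedicated PII service of the kind mentioned above.

```python
# Illustrative sketch of a text-in/text-out PII anonymization tool.
# Patterns are simplistic examples, not production-grade PII detection.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(anonymize("Reach Jane at jane.doe@example.com or 555-123-4567."))
# Reach Jane at <EMAIL> or <PHONE>.
```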
8. What measures are in place for data encryption or obfuscation at transit and rest?
Data encryption is implemented across the network, infrastructure, database, and storage levels. All data is securely encrypted in transit (when moving between systems) as well as while at rest (when stored), ensuring comprehensive data security across the platform.
9. What data loss controls and recovery mechanisms are implemented to ensure data integrity and business continuity?
The platform implements robust backup, recovery, and redundancy mechanisms at the infrastructure level. These controls ensure high availability and data integrity, protecting against data loss and ensuring business continuity even in the event of system failures.
10. How does the platform ensure Intellectual Property (IP) protection during content generation?
IP detection and protection is facilitated through Custom Tools. Builders can integrate third-party or proprietary IP services within agentic process flows or assign them directly to AI agents. This capability ensures content generation is responsible and compliant with Intellectual Property standards, preventing the misuse of proprietary or copyrighted material.
11. How does the platform ensure bias and fairness detection and mitigation for agents?
Bias and fairness are managed through the benchmarking framework. Users can define, test, and validate agents against specific bias and fairness criteria using LLM-as-a-judge evaluations. This process ensures responsible model behavior and alignment with organizational ethics and compliance standards.
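The evaluation loop described above can be sketched as a harness that sends each test case to a judge model against a fairness criterion and aggregates the pass rate. Here `call_judge_model` is a stub standing in for the benchmarking framework's real judge LLM; the prompt wording and case data are assumptions for illustration.

```python
# Hypothetical LLM-as-a-judge fairness harness. call_judge_model is a stub;
# in the platform this role is played by the benchmarking framework's judge.
def call_judge_model(prompt: str) -> str:
    # Stub standing in for a real LLM call; returns a verdict string.
    return "PASS" if "refund policy" in prompt else "FAIL"

def judge_fairness(question: str, answer: str) -> bool:
    prompt = (
        "Does the following answer treat all customer groups equally, "
        "without stereotyping or exclusion? Reply PASS or FAIL.\n"
        f"Q: {question}\nA: {answer}"
    )
    return call_judge_model(prompt) == "PASS"

cases = [("What is the refund policy?", "Refunds within 30 days for everyone.")]
results = [judge_fairness(q, a) for q, a in cases]
print(sum(results) / len(results))  # fraction of cases passing the criterion
```

In practice the judge would be a real model and the criterion text would come from the organization's ethics and compliance standards.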
12. How does the platform prevent the generation of unwanted or illegal content (e.g., copyrighted materials)?
The platform enables filtering of unwanted or illegal content through configurable Custom Tools. Builders can integrate various third-party or proprietary content filtering services within agentic process flows or assign them directly to AI agents. This ensures responsible generation that aligns with organizational and legal compliance standards.
13. Does the platform support Single Sign-On (SSO) integration for enterprise users?
Yes, the platform supports enterprise-grade Single Sign-On (SSO) integration. This feature is fully functional and already live in client implementations, providing a secure, streamlined authentication experience aligned with enterprise security policies.
14. How can users provide feedback to continuously improve agent quality?
The platform, via the EnterpriseGPT (Conversational AI) interface, includes embedded user controls for continuous quality feedback. These include a simple like/dislike option that lets users comment immediately on the accuracy and quality of the answers they receive. This feedback loop is a critical mechanism for continuous agent refinement and quality improvement by agent builders.
15. How are final outputs, records, and activities governed and made auditable?
All final outputs, records, and activities are designed for enterprise control and auditability. Final outputs are downloadable (e.g., for reporting or archival) and shareable within the strict confines of governance. Downloads include detailed governance metrics, such as cost tracking and token usage (visible via the LLM Optimization Hub), ensuring every transaction is accounted for and fully auditable for compliance purposes.
16. How does the platform provide explainability and auditability for AI agent decisions and data lineage?
The TRACES feature provides complete AI agent and multi-agent explainability. This includes:
Thought & Decision Patterns: Visibility into the agent's thought process, decision patterns, and tool use.
Data Source Lineage: Complete data lineage for unstructured data (down to the chunk level in the EKG) and for structured data (down to the query level and the data received from databases and enterprise applications).
All historic transactions, complete with TRACES, can be viewed through the Governance screen for comprehensive auditing.
17. What deployment options are supported for the platform across different regions?
The platform supports three primary deployment models:
Cloud-Based (PaaS / VPC): The platform can be deployed on the PF Cloud (PaaS) or securely within a client's own Virtual Private Cloud (VPC), providing flexibility for cloud-native adoption and strict network controls.
Hybrid: The system supports a hybrid architecture in which the platform is hosted in the cloud/VPC but maintains secure connectivity to client data residing on-premises.
Edge-Hosted Frontends: The architecture also accommodates edge-hosted frontends for client applications, ensuring low-latency access and an optimal user experience where required.
# Limitations
- Agent Visibility & Sharing Restrictions - The visibility setting is binary (Private or Public within a workspace). This simple model may not support more nuanced sharing levels, such as sharing with select user groups, read-only access to certain agents, or temporary access.
- Customizable PII Guidelines - Custom PII definitions are not yet supported by the platform.
- Custom Toxicity & Prompt-Injection Guidelines - There are currently no custom guidelines for toxicity detection or prompt-injection protection.
- Scope Limited to Agent Interactions - Toxicity detection primarily monitors agent conversations and interactions; it may not extend to other platform components such as document uploads, Knowledge Gardens, or external tool integrations.