System Card: Celonis Insight Explorer
January 15, 2026
System Name: Celonis Insight Explorer
System Version: GA
1. System Overview
Description: Celonis Insight Explorer is a Studio Asset within the Celonis Execution Management System (EMS). It connects to an organization's Knowledge Model to automatically discover and surface potentially significant observations ("insights") from user-selected metrics. It facilitates further analysis by auto-generating Celonis Views for each insight.
AI Integration: Insight Explorer employs a multi-layered AI approach:
Core Insights: Utilizes the "Recommended Insights" AI platform capability. This backend system uses intelligent algorithms and statistical techniques (e.g., subgroup discovery, time series analysis; a simplified scoring sketch follows this list) to identify attribute-based, process-based, and trend-based insights. Core insight generation can operate without GenAI, but enabling the GenAI features (see below) significantly improves performance and the overall experience.
GenAI features: Multiple features leverage Large Language Model (LLM) capabilities. These are used to generate metrics, improve insight discovery, and generate explanations.
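To make the statistical techniques named above more tangible, here is a minimal sketch of attribute-based subgroup discovery using weighted relative accuracy (WRAcc). It assumes a pandas case table with a binary target metric; it is illustrative only and does not reflect the actual "Recommended Insights" algorithms, and the column names in the usage example are hypothetical.

```python
# Minimal, self-contained sketch of attribute-based subgroup discovery using
# weighted relative accuracy (WRAcc). Illustrative only -- NOT the Celonis
# "Recommended Insights" implementation; column names are hypothetical.
import pandas as pd

def rank_subgroups(df: pd.DataFrame, target: str, attributes: list[str]) -> list[tuple[str, object, float]]:
    """Rank single-condition subgroups (attribute == value) by WRAcc.

    WRAcc = coverage * (subgroup target rate - overall target rate):
    a high absolute score means a sizeable subgroup whose behaviour deviates
    strongly from the baseline -- the kind of pattern that would be surfaced
    as an attribute-based insight.
    """
    overall_rate = df[target].mean()
    scored = []
    for attr in attributes:
        for value in df[attr].dropna().unique():
            mask = df[attr] == value
            coverage = mask.mean()                                # share of cases in the subgroup
            deviation = df.loc[mask, target].mean() - overall_rate
            scored.append((attr, value, coverage * deviation))
    return sorted(scored, key=lambda s: abs(s[2]), reverse=True)

# Hypothetical usage: which vendor or country deviates most on late deliveries?
# cases = pd.DataFrame({"vendor": [...], "country": [...], "late_delivery": [0, 1, ...]})
# top = rank_subgroups(cases, target="late_delivery", attributes=["vendor", "country"])
```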
User Interaction: Users (Analysts, COE Leads, Value Engineers) configure an Insight Explorer asset by selecting metrics (and optionally attributes, filters, and event logs) from their Knowledge Model; a hypothetical configuration sketch follows this list. The system presents a list of AI-generated metrics or insights.
Users can review the recommended metrics and insights and choose which ones to pursue further.
Users can click on an insight to open an auto-generated, contextualized View for deeper validation and analysis. These Views can be exported.
The GenAI features can be enabled/disabled in the advanced settings of the asset.
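For illustration, the configuration surface described above could be represented roughly as follows. The field names are assumptions made for this sketch and do not reflect the actual Studio asset schema.

```python
# Hypothetical shape of an Insight Explorer configuration, mirroring the
# user-facing options described above. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class InsightExplorerConfig:
    knowledge_model: str                                   # Knowledge Model the asset is connected to
    metrics: list[str]                                     # selected metrics/KPIs
    attributes: list[str] = field(default_factory=list)    # optional attributes to slice by
    event_log: str | None = None                           # optional event log for process-based insights
    pql_filters: list[str] = field(default_factory=list)   # optional PQL filter expressions
    genai_enabled: bool = True                             # advanced setting: enable/disable GenAI features
```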
2. AI Model Dependency
Model Name & Version:
Core Insights: This platform capability uses a collection of intelligent algorithms and statistical methods. Specific underlying algorithmic versions are managed by Celonis.
Large Language Models (LLMs): Integrated LLM capabilities are used for recommending metrics, insights, and generating insight explanations.
By default, it uses the following models: azure-openai-gpt-4-1 and aws-claude-sonnet-4. Other models (e.g., bring-your-own-model, BYOM) can be configured by administrators.
Impact of Model Output:
Core Insights Output: Generates ranked "insights" identifying potential areas of process improvement, significant trends, or correlations. This guides users' analytical focus. It also auto-generates Celonis Views for validation.
LLM-Generated Output: Produces metrics (including names, descriptions, and PQL expressions), insights, or insight explanations.
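As a rough illustration of what such an LLM-generated metric might look like, the sketch below uses hypothetical field names and a sample PQL expression whose table and column names are made up for the example.

```python
# Hypothetical representation of an LLM-generated metric. The field names and
# the sample PQL expression (including table/column names) are illustrative.
from dataclasses import dataclass

@dataclass
class GeneratedMetric:
    name: str         # human-readable metric name proposed by the LLM
    description: str  # short description of what the metric measures
    pql: str          # PQL expression to be evaluated against the data model

example_metric = GeneratedMetric(
    name="Average order net value",
    description="Mean net value per purchase order case.",
    pql='AVG("CASES"."NET_VALUE")',
)
```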
3. Data Flow
Data Inputs:
User Configuration: Selected metrics, attributes (optional), event log (optional), and PQL filters (optional) from a Celonis Knowledge Model. User preference to enable/disable GenAI settings.
Knowledge Model: Definitions, metadata (e.g., KPI desired direction, formats), and relationships.
Data Model: The underlying case-centric or object-centric process data.
Generated Insights (as input to LLM): The insights produced by the "Recommended Insights" capability serve as input for the LLM explanation functionality.
Data Processing:
Core Insights AI:
Knowledge Readiness checks (validating metric/attribute compatibility).
Intelligent Attribute Recommendation.
Insight Generation algorithms identify patterns and rank insights by potential impact.
LLM Processing (if enabled):
The event log is analyzed for process inefficiencies, which are expressed as recommended metrics.
Insight attributes are recommended based on the LLM's semantic understanding.
Generated insight details (e.g., filters, metric context) are processed using integrated LLM capabilities.
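To make the last step concrete, the following sketch shows how generated insight details (metric context, filters, output language) could be assembled into an explanation prompt. The template, parameter names, and example filter codes are assumptions for illustration, not the prompts Celonis actually uses.

```python
# Illustrative prompt assembly for insight explanations. The template and the
# example filter codes below are hypothetical, not the actual Celonis prompts.
def build_explanation_prompt(metric_name: str, filters: dict[str, str], language: str = "English") -> str:
    filter_lines = "\n".join(f"- {column} = {value}" for column, value in filters.items())
    return (
        f"Explain in {language}, in plain business terms, what the following filters "
        f"mean for the metric '{metric_name}':\n{filter_lines}"
    )

# Hypothetical usage: translate cryptic filter codes into a readable explanation.
# prompt = build_explanation_prompt(
#     "On-time delivery rate",
#     {"LIFNR": "0000104223", "EKORG": "1000"},
# )
```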
Data Outputs:
A list of recommended metrics.
A list of ranked insights, presented as interactive cards. Titles may be LLM-generated.
LLM-generated explanations for filter names and values (e.g., as tooltips or collapsible sections), available in the configured language.
Auto-generated Celonis Views for each selected insight, aiding validation.
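Taken together, the outputs above could be pictured as a structure like the following; the field names are hypothetical and only show how the algorithmically generated and LLM-generated parts fit together on an insight card.

```python
# Hypothetical shape of a single insight card as presented to the user.
# Field names are illustrative, not the actual API or UI model.
insight_card = {
    "title": "Late deliveries are concentrated in one purchasing org",  # may be LLM-generated
    "rank": 1,                                                          # position after impact ranking
    "metric": "On-time delivery rate",
    "filters": {"EKORG": "1000"},                                       # applied filter names/values
    "explanation": "Purchasing organization 1000 shows a markedly lower on-time rate.",  # LLM-generated, in the configured language
    "view_id": "auto-generated-view-123",                               # link to the auto-generated View
}
```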
4. Human Oversight & Control
Level of Automation: Human-in-the-loop / Augmented Intelligence. The system automates insight discovery and, optionally, explanation generation, but human expertise is indispensable.
Human Intervention Points:
Configuration: Users select Knowledge Model, metrics, attributes, filters. Users enable/disable GenAI features.
Evaluation of Insights & Explanations: Users must critically evaluate the relevance, validity, and accuracy of each generated metric, insight, and its LLM-generated explanation. The system "cannot produce relevant insights with 100% precision, nor can it distinguish correlation from causation." LLMs also "can produce unexpected or irrelevant answers." Subject matter expertise is paramount.
Prioritization: Users "bookmark" insights and can "dismiss" others.
Validation in Views: Users interact with auto-generated Views for detailed validation.
(Future) Feedback: Users may be able to provide feedback (e.g., like/dislike) on LLM-generated explanations.
Decision to Act: Users determine if an insight represents a true business opportunity.
Monitoring & Evaluation:
Users directly evaluate insights and explanations. Future feedback mechanisms will contribute to the evaluation of LLM outputs.
Celonis may track aggregated usage metrics.
If underlying data/knowledge changes and insights are recomputed, LLM explanations are also recomputed.
5. Safety & Security
Data Security: Operates within the Celonis EMS security framework, respecting user permissions (Edit Package, Use Data Model). Insights and explanations are generated based on data the user is authorized to access. Standard Celonis data encryption and access controls apply.
AI Kill Switch: An AI kill switch, if enabled at a higher level, would prevent users from enabling LLM explanations.
Data Privacy (for LLM features): Users must agree to the Celonis AI Addendum to use LLM-assisted features. Procedures for the deletion of LLM-generated data (e.g., upon AI kill switch activation or customer request) are important considerations.
System Reliability: Dependent on the Celonis EMS platform, "Recommended Insights" services, and the availability of the integrated LLM capabilities.
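As an illustration of how these controls interact before any LLM-assisted feature is invoked, the sketch below shows a simple pre-flight check; the flag names are hypothetical and do not correspond to actual platform settings.

```python
# Illustrative pre-flight check for LLM-assisted features, reflecting the kill
# switch, AI Addendum, and availability conditions described in this section.
# Flag names are hypothetical, not actual platform settings.
def llm_features_allowed(
    kill_switch_active: bool,
    ai_addendum_accepted: bool,
    model_enabled: bool,
) -> tuple[bool, str | None]:
    """Return (allowed, warning); a warning message is surfaced to the user if not allowed."""
    if kill_switch_active:
        return False, "AI kill switch is active; LLM-assisted features are disabled."
    if not ai_addendum_accepted:
        return False, "The Celonis AI Addendum has not been agreed to; LLM features are unavailable."
    if not model_enabled:
        return False, "No LLM model is enabled for this feature."
    return True, None
```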
6. Ethical Considerations
Fairness & Non-Discrimination:
Core Insights: There is a risk of misleading insights if the underlying data or configuration is biased, or if users misinterpret correlation as causation. Human evaluation is key.
GenAI features: LLMs could potentially generate inaccurate or subtly biased explanations, though the primary aim is to clarify technical terms. This risk is minimized through careful prompting, but the possibility of "unexpected or irrelevant answers" remains. Users must critically assess whether the explanation accurately aids their understanding of the underlying data-driven insight.
Transparency & Explainability:
Core Insights: System provides applied filters and metrics. Auto-generated Views aid validation. The "Recommended Insights" algorithms are complex.
GenAI features: These features significantly enhance the transparency and understandability of insight cards by translating cryptic codes and providing contextual titles. However, the LLM's own reasoning process for generating a specific explanation remains a "black box." Transparency here is about the clarity of the output and the user's ability to configure its use.
Error Handling: The system provides warnings (e.g., if the AI kill switch is on, if cost/rate limits for LLM use are reached, or if the necessary models are not enabled).
Accountability: Users are responsible for validating insights and any decisions made based on them, including those aided by LLM explanations.