System Card: Celonis Insight Explorer
April 29, 2025
System Name: Celonis Insight Explorer
System Version: GA
1. System Overview
Description: Celonis Insight Explorer is a Studio Asset within the Celonis Execution Management System (EMS). It connects to an organization's Knowledge Model to automatically discover and surface potentially significant observations ("insights") from user-selected metrics. It facilitates further analysis by auto-generating Celonis Views for each insight.
AI Integration: Insight Explorer employs a multi-layered AI approach:
Core Insight Generation: Utilizes the "Recommended Insights" AI platform capability. This backend system uses intelligent algorithms and statistical techniques (e.g., subgroup discovery, time series analysis) to identify attribute-based, process-based, and trend-based insights.
Insight Explanation: Leverages integrated Large Language Model (LLM) capabilities within the Celonis platform. These LLM functions are used to generate more understandable, contextual titles for insight cards and provide natural language explanations for filter names and values (especially for cryptic source-system codes like SAP Tcodes), including localization. This feature is configurable by users.
User Interaction: Users (Analysts, COE Leads, Value Engineers) configure an Insight Explorer asset by selecting metrics (and optionally attributes, filters) from their Knowledge Model. The system presents a list of AI-generated insights.
Users can review these insights (with LLM-generated titles and filter explanations, if enabled), star relevant ones, and hide irrelevant ones.
Users can click on an insight to open an auto-generated, contextualized View for deeper validation and analysis. These Views can be exported.
LLM-assisted explanations are typically enabled via the Insight Explorer (advanced) configuration page.
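To make the "subgroup discovery" technique named above concrete, the following is a minimal sketch, assuming hypothetical case data and field names (it is not the Celonis implementation): cases are grouped by an attribute value, each subgroup's KPI mean is compared against the overall mean, and candidate insights are ranked by a size-weighted deviation score.

```python
from collections import defaultdict

def discover_insights(cases, attribute, kpi):
    """Rank attribute values by how far their mean KPI deviates from
    the overall mean, weighted by subgroup size. `cases`, `attribute`,
    and `kpi` are illustrative placeholders, not Celonis API names."""
    overall = sum(c[kpi] for c in cases) / len(cases)
    groups = defaultdict(list)
    for c in cases:
        groups[c[attribute]].append(c[kpi])
    insights = []
    for value, values in groups.items():
        mean = sum(values) / len(values)
        # Impact score: deviation from the overall mean, scaled by coverage
        score = abs(mean - overall) * (len(values) / len(cases))
        insights.append({"filter": f"{attribute} = {value}",
                         "subgroup_mean": mean, "impact": score})
    return sorted(insights, key=lambda i: i["impact"], reverse=True)

# Hypothetical event data: SAP transaction codes vs. throughput time
cases = [
    {"tcode": "VA01", "throughput_days": 12},
    {"tcode": "VA01", "throughput_days": 16},
    {"tcode": "VA02", "throughput_days": 3},
    {"tcode": "VA02", "throughput_days": 5},
    {"tcode": "VA05", "throughput_days": 8},
    {"tcode": "VA05", "throughput_days": 9},
]
top = discover_insights(cases, "tcode", "throughput_days")
```

The production capability adds knowledge-readiness checks, time-series techniques, and impact ranking beyond this toy scoring, but the core pattern is the same: enumerate candidate subgroups and surface those with the largest, best-supported KPI deviations.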
2. AI Model Dependency
Model Name & Version:
Recommended Insights: This platform capability uses a collection of intelligent algorithms and statistical methods. Specific underlying algorithmic versions are managed by Celonis.
Large Language Models (LLMs): Integrated LLM capabilities are used for insight title generation and filter explanations. Planned enhancements may allow users to select preferred underlying models and response verbosity for these functions.
By default, the platform's integrated models are used. Administrators can configure alternative models (e.g., bring-your-own-model, BYOM).
Impact of Model Output:
Recommended Insights Output: Generates ranked "insights" identifying potential areas of process improvement, significant trends, or correlations. This guides users' analytical focus. It also auto-generates Celonis Views for validation.
LLM-Generated Output: Produces natural language titles and explanations for insight cards and their filters. This enhances the understandability and accessibility of the core insights, especially for users not deeply familiar with underlying source system terminologies. It can also provide localized explanations.
3. Data Flow
Data Inputs:
User Configuration: Selected metrics, attributes (optional), and PQL filters (optional) from a Celonis Knowledge Model. User preference to enable/disable LLM explanations, and potentially language/verbosity settings.
Knowledge Model: Definitions, metadata (e.g., KPI desired direction, formats), and relationships.
Data Model: The underlying case-centric or object-centric process data.
Generated Insights (as input to LLM): The insights produced by the "Recommended Insights" capability serve as input for the LLM explanation functionality.
Data Processing:
Recommended Insights AI:
Knowledge Readiness checks (validating metric/attribute compatibility).
Intelligent Attribute Recommendation (optional).
Insight Generation algorithms identify patterns and rank insights by potential impact.
LLM Processing (for explanations, if enabled):
Generated insight details (e.g., filters, metric context) are processed using integrated LLM capabilities.
The LLM functions generate a new title and/or explanations for filter components.
The LLM capabilities also perform language translation for localization if requested/configured.
Data Outputs:
A list of ranked insights, presented as interactive cards. Titles may be LLM-generated.
LLM-generated explanations for filter names and values (e.g., as tooltips or collapsible sections), available in the configured language.
Auto-generated Celonis Views for each selected insight, aiding validation.
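The LLM explanation step described above can be sketched as a prompt-assembly function. This is an illustrative sketch only: the prompt structure, field names, and `build_explanation_prompt` helper are assumptions, not the Celonis internal API.

```python
def build_explanation_prompt(insight, language="en"):
    """Assemble a prompt asking an LLM to retitle an insight and
    explain its filters in plain language. Structure is illustrative."""
    filters = "\n".join(
        f"- {f['field']} = {f['value']}" for f in insight["filters"]
    )
    return (
        "You are explaining a process-mining insight to a business analyst.\n"
        f"Metric: {insight['metric']}\n"
        f"Filters:\n{filters}\n"
        f"Explain each filter name and value in plain {language}. "
        "Expand cryptic source-system codes (e.g., SAP transaction codes) "
        "and suggest a short, readable title for the insight."
    )

# Hypothetical insight emitted by the core generation step
insight = {
    "metric": "Average throughput time",
    "filters": [{"field": "Transaction Code", "value": "VA01"}],
}
prompt = build_explanation_prompt(insight, language="en")
```

The key design point this illustrates is that the LLM receives only the already-computed insight metadata (metric context and filters), not raw process data; its output is a presentation layer over the statistical result, which is why users must still validate the underlying insight in the generated View.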
4. Human Oversight & Control
Level of Automation: Human-in-the-loop / Augmented Intelligence. The system automates insight discovery and, optionally, explanation generation, but human expertise is indispensable.
Human Intervention Points:
Configuration: Users select Knowledge Model, metrics, attributes, filters. Users enable/disable LLM-generated explanations.
(Future) LLM Configuration: Users may select LLM models and verbosity for the explanation features.
Evaluation of Insights & Explanations: Users must critically evaluate the relevance, validity, and accuracy of each generated insight and its LLM-generated explanation. The system "cannot produce relevant insights with 100% precision, nor can it distinguish correlation from causation." LLMs also "can produce unexpected or irrelevant answers." Subject matter expertise is paramount.
Prioritization: Users "star" insights and can "hide" others.
Validation in Views: Users interact with auto-generated Views for detailed validation.
(Future) Feedback: Users may be able to provide feedback (e.g., like/dislike) on LLM-generated explanations.
Decision to Act: Users determine if an insight represents a true business opportunity.
Monitoring & Evaluation:
Users directly evaluate insights and explanations. Future feedback mechanisms will contribute to the evaluation of LLM outputs.
Celonis may track aggregated usage metrics.
If underlying data/knowledge changes and insights are recomputed, LLM explanations are also recomputed.
5. Safety & Security
Data Security: Operates within the Celonis EMS security framework, respecting user permissions (Edit Package, Use Data Model). Insights and explanations are generated based on data the user is authorized to access. Standard Celonis data encryption and access controls apply.
AI Kill Switch: If an AI kill switch is enabled at a higher administrative level, users are prevented from enabling LLM-assisted explanations.
Data Privacy (for LLM features): Users must agree to the Celonis AI Addendum to use LLM-assisted features. Procedures for the deletion of LLM-generated data (e.g., upon AI kill switch activation or customer request) are important considerations.
System Reliability: Dependent on the Celonis EMS platform, "Recommended Insights" services, and the availability of the integrated LLM capabilities.
6. Ethical Considerations
Fairness & Non-Discrimination:
Core Insights: Risk of misleading insights if data/configuration is biased or if users misinterpret correlation. Human evaluation is key.
LLM Explanations: LLMs could potentially generate inaccurate or subtly biased explanations, though the primary aim is to clarify technical terms. Minimized through careful prompting, but risk of "unexpected or irrelevant answers" exists. Users must critically assess if the explanation accurately aids their understanding of the underlying data-driven insight.
Transparency & Explainability:
Core Insights: The system discloses the applied filters and metrics for each insight, and auto-generated Views aid validation. The "Recommended Insights" algorithms themselves, however, are complex and not fully inspectable by end users.
LLM Explanations: This feature significantly enhances the transparency and understandability of insight cards by translating cryptic codes and providing contextual titles. However, the LLM's own reasoning process for generating a specific explanation remains a "black box." Transparency here is about the output's clarity and the user's ability to configure its use.
Error Handling: System provides warnings (e.g., if AI kill switch is on, cost/rate limits for LLM use are reached, or necessary models are not enabled).
Accountability: Users are responsible for validating insights and any decisions made based on them, including those aided by LLM explanations.