Success rate of qualifying questions asked by reps
Per call: asked.length / (asked.length + missed.length) × 100; then averaged across all calls
Frontend: qaAnalyticsService.ts — calculateQualifyingQuestionsRate() — per-call ratio averaged, rounded to 1 decimal. Legacy fallback: compliance_rate field
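The per-call ratio-then-average calculation can be sketched as follows. This is a minimal illustration of the formula above; the interface shape and function name are assumptions, not the actual `calculateQualifyingQuestionsRate()` signature.

```typescript
// Hypothetical shape for a call's qualifying-questions data.
interface QualifyingQuestions {
  asked: string[];
  missed: string[];
}

function qualifyingQuestionsRate(calls: QualifyingQuestions[]): number {
  // Per call: asked / (asked + missed) × 100; calls with no questions are skipped.
  const rates = calls
    .filter(c => c.asked.length + c.missed.length > 0)
    .map(c => (c.asked.length / (c.asked.length + c.missed.length)) * 100);
  if (rates.length === 0) return 0;
  // Average across calls, rounded to 1 decimal place.
  const avg = rates.reduce((a, b) => a + b, 0) / rates.length;
  return Math.round(avg * 10) / 10;
}
```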
Coaching ROI Potential (aka Revenue Increase Potential)
Potential revenue uplift from coaching interventions on underperforming calls with low-probability deals
sum(deal_value × (1 − probability)) for calls where total_score_percent < 70 AND probability < 0.7
Frontend: MainQADashboard.tsx — filters calls by QA score < 70% and probability < 70%, sums deal_value × (1 − probability). Backend: qa_aggregation_service.py uses same logic
Thresholds: QA score < 70%, probability < 0.7 (70%)
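The filter-then-sum logic above can be sketched as a small function. Field names are illustrative stand-ins for the call record shape used by MainQADashboard.tsx.

```typescript
// Hypothetical call record fields for the ROI calculation.
interface DealCall {
  totalScorePercent: number; // QA score, 0-100
  dealValue: number;
  probability: number;       // close probability, 0-1
}

function coachingRoiPotential(calls: DealCall[]): number {
  // Only underperforming calls (QA < 70%) on low-probability deals (< 0.7) qualify.
  return calls
    .filter(c => c.totalScorePercent < 70 && c.probability < 0.7)
    .reduce((sum, c) => sum + c.dealValue * (1 - c.probability), 0);
}
```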
Deals at Risk
Count of deals with a composite risk score above the at-risk threshold
Count of deals where riskScore > 0.75. The risk score combines close probability (40%), call quality (25%), risk factor count (up to 20%), and customer sentiment (15%) into a single 0–1 value
Frontend: MainQADashboard.tsx — computes riskScore per deal via calculateRiskScore(), counts deals exceeding 0.75. Backend: qa_aggregation_service.py uses same logic
QA Category Scores
Average scores across 6 QA categories, derived from 16 grading elements
Each category = average of mapped element scores across all calls. Categories and element mappings: (1) Call Opening & Closing = avg(Greetings & Intro, Call Closing); (2) Verification = avg(Verification & Validation); (3) Product Knowledge & Selling = avg(Needs Assessment, Product/Service Presentation, Objection Handling, Sales Tenacity, Closing Technique, Upsell & Cross-sell); (4) Communication Skills = avg(Clarity of Verbal Expression, Active Listening, Telephony Skill & Etiquette); (5) Include NPS = avg(NPS Prompt); (6) Operational System Proficiency = avg(Activity Log, Opportunity Handling, Respond to Customer Queries)
Frontend: qaAnalyticsService.ts — calculateQACategories() — maps each grading element name (case-insensitive includes) to a category, sums scores, divides by (element count × total calls)
Element name matching is case-insensitive substring
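The case-insensitive substring mapping can be sketched like this. The keyword table below is an abbreviated, assumed version of the real mapping; only a few of the six categories are shown.

```typescript
// Abbreviated, illustrative keyword table (the real mapping covers all 6 categories).
const CATEGORY_KEYWORDS: Record<string, string[]> = {
  "Call Opening & Closing": ["greetings", "call closing"],
  "Verification": ["verification"],
  "Communication Skills": ["clarity", "active listening", "telephony"],
};

function categoryFor(elementName: string): string | null {
  // Case-insensitive substring match, as described above.
  const lower = elementName.toLowerCase();
  for (const [category, keywords] of Object.entries(CATEGORY_KEYWORDS)) {
    if (keywords.some(k => lower.includes(k))) return category;
  }
  return null; // element not mapped to any category
}
```

Substring matching keeps the mapping robust to minor element-name variants ("Verification & Validation" still matches "verification"), at the cost of requiring keywords that don't collide across categories.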
Rep Score Distribution
Agent performance spread with individual rankings
Per call: for each grading element with weight > 0 and not in N/A list, apply score >= 75 ? 100 : 0, then sum(passFailScore × weight) / totalWeight. Per agent: avg(call scores). Agents sorted by score descending and ranked
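The pass/fail weighted score per call can be sketched as follows; the element shape is an assumption, and the N/A exclusion is modeled as a simple flag.

```typescript
// Hypothetical grading-element shape.
interface GradedElement {
  name: string;
  score: number;          // 0-100
  weight: number;
  notApplicable?: boolean;
}

function callScore(elements: GradedElement[]): number {
  // Only weighted, applicable elements count.
  const scored = elements.filter(e => e.weight > 0 && !e.notApplicable);
  const totalWeight = scored.reduce((s, e) => s + e.weight, 0);
  if (totalWeight === 0) return 0;
  // Each element is converted to pass/fail: >= 75 becomes 100, otherwise 0.
  const weighted = scored.reduce(
    (s, e) => s + (e.score >= 75 ? 100 : 0) * e.weight, 0);
  return weighted / totalWeight;
}
```

Per agent, these call scores are then averaged and agents ranked by the result.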
Recent Alerts
Quality issues requiring attention with priority indicators
Alerts generated from call analysis: High priority for PDPA compliance breach (pdpa_pass = false) or low call score (total_score_percent < 50). Alerts filtered by current rep/date selection. Display capped at 10
Backend: qa_aggregation_service.py — _aggregate() generates alerts. Frontend: qaAnalyticsService.ts — getRecentAlerts() filters by active call IDs/agents, maps priority to display
High: PDPA breach or score < 50. Display limit: 10
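The alert rules above can be sketched as a small generator. Field names and the Alert shape are assumptions, not the actual backend output schema.

```typescript
// Hypothetical call-result and alert shapes.
interface CallResult {
  id: string;
  pdpaPass: boolean;
  totalScorePercent: number;
}
interface Alert { callId: string; reason: string; priority: "high"; }

function generateAlerts(calls: CallResult[], limit = 10): Alert[] {
  const alerts: Alert[] = [];
  for (const c of calls) {
    // High priority: PDPA compliance breach, or call score below 50.
    if (!c.pdpaPass) {
      alerts.push({ callId: c.id, reason: "PDPA compliance breach", priority: "high" });
    } else if (c.totalScorePercent < 50) {
      alerts.push({ callId: c.id, reason: "Low call score", priority: "high" });
    }
  }
  return alerts.slice(0, limit); // display capped at 10
}
```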
Sentiment Score
Overall customer sentiment for a call
sentiment_analysis.overall_sentiment × 100 (displayed as 0-100%)
Frontend: CallAnalysisDashboard.tsx — Math.round(call.sentiment_analysis.overall_sentiment * 100). Backend: ai_analyzer.py — keyword-based fallback: (positive_count - negative_count) / (positive_count + negative_count), normalized from [-1,1] to [0,1] via (sentiment + 1) / 2
Range: 0-100%
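The keyword-based fallback formula can be sketched directly from the description above. The neutral default for the zero-keyword case is an assumption.

```typescript
// Keyword-count fallback, mirroring the formula from ai_analyzer.py:
// (positive - negative) / (positive + negative), then mapped [-1,1] -> [0,1].
function fallbackSentiment(positiveCount: number, negativeCount: number): number {
  const total = positiveCount + negativeCount;
  if (total === 0) return 0.5; // no keywords found: treat as neutral (assumption)
  const raw = (positiveCount - negativeCount) / total; // in [-1, 1]
  return (raw + 1) / 2;                                // normalized to [0, 1]
}
```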
Sentiment Progression
Sentiment change over the course of a call
Transcript split into 5 equal chunks; average sentiment per chunk produces a 5-point progression array
Backend: ai_analyzer.py — divides segment_sentiments into 5 chunks, averages each. Frontend: transcriptService.ts — linear interpolation to match transcript segment count
Max 5 data points
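The 5-chunk averaging can be sketched as below. Ceiling-based chunk sizing is an assumption about how ai_analyzer.py divides the segments; other splits (e.g. even remainder distribution) would give slightly different boundaries.

```typescript
function sentimentProgression(segmentSentiments: number[], points = 5): number[] {
  if (segmentSentiments.length === 0) return [];
  // Split segments into up to `points` contiguous chunks and average each.
  const chunkSize = Math.ceil(segmentSentiments.length / points);
  const progression: number[] = [];
  for (let i = 0; i < segmentSentiments.length; i += chunkSize) {
    const chunk = segmentSentiments.slice(i, i + chunkSize);
    progression.push(chunk.reduce((a, b) => a + b, 0) / chunk.length);
  }
  return progression; // at most `points` data points
}
```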
Key Sentiment Moments
Points in the call where sentiment shifted significantly
Segments where abs(current_sentiment - previous_sentiment) > 0.2
Frontend: transcriptService.ts — flags segments exceeding the threshold
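The threshold check can be sketched in a few lines; the Segment shape is illustrative.

```typescript
// Hypothetical segment shape with a 0-1 (or -1..1) sentiment value.
interface Segment { index: number; sentiment: number; }

function keyMoments(segments: Segment[], threshold = 0.2): Segment[] {
  // A segment is a key moment when sentiment jumps by more than the
  // threshold relative to the immediately preceding segment.
  return segments.filter((seg, i) =>
    i > 0 && Math.abs(seg.sentiment - segments[i - 1].sentiment) > threshold);
}
```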
Product Knowledge & Selling
Needs Assessment, Product/Service Presentation, Objection Handling, Sales Tenacity, Closing Technique, Upsell & Cross-sell
avg(element_scores) per call, then mean across calls
Communication Skills
Clarity of Verbal Expression, Active Listening, Telephony Skill & Etiquette
avg(element_scores) per call, then mean across calls
Include NPS
NPS Prompt
avg(element_score) across calls
Operational System Proficiency
Activity Log, Opportunity Handling, Respond to Customer Queries
avg(element_scores) per call, then mean across calls
Implementation: Frontend: AgentPerformanceDashboard.tsx — getScoreFromGrading() finds grading element by case-insensitive substring match, computes per-call category average, then averages across calls. Individual rep radar overlays agent values against team averages.
Team Improvement Areas
QA elements needing most improvement across the team
Bottom 5 grading elements by average score across all calls. Priority: High if avg_score < 60, Medium otherwise. Affected agents: agents with at least one call where the element score < 75
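The bottom-5 grouping with priority tagging can be sketched like this; the input shape is an assumption, and the affected-agents list is omitted for brevity.

```typescript
// Hypothetical flattened element-score record (one per element per call).
interface ElementScore { element: string; score: number; agent: string; }

function improvementAreas(scores: ElementScore[], bottomN = 5) {
  // Group scores by element name.
  const byElement = new Map<string, number[]>();
  for (const s of scores) {
    byElement.set(s.element, [...(byElement.get(s.element) ?? []), s.score]);
  }
  // Average each element, tag priority, and keep the lowest N.
  return [...byElement.entries()]
    .map(([element, vals]) => {
      const avg = vals.reduce((a, b) => a + b, 0) / vals.length;
      return { element, avg, priority: avg < 60 ? "High" : "Medium" };
    })
    .sort((a, b) => a.avg - b.avg)
    .slice(0, bottomN);
}
```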
Agent Strengths
Highest scoring areas for an individual rep
Top 3 grading elements by average score for that agent's calls
Backend: qa_aggregation_service.py — _find_strengths() — groups element scores per agent, sorts descending, takes top 3
Limit: 3
Agent Improvement Areas
Lowest scoring areas for an individual rep
Bottom 3 grading elements by average score for that agent's calls
Backend: qa_aggregation_service.py — _find_weak_areas() — groups element scores per agent, sorts ascending, takes bottom 3
Limit: 3
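Strengths and improvement areas are the same ranking in opposite directions, which a single helper can sketch (this generic form is an illustration, not the actual _find_strengths/_find_weak_areas implementation):

```typescript
// Rank an agent's per-element averages and keep the top or bottom N names.
function rankElements(
  avgByElement: Record<string, number>,
  direction: "top" | "bottom",
  n = 3,
): string[] {
  const sorted = Object.entries(avgByElement)
    .sort((a, b) => (direction === "top" ? b[1] - a[1] : a[1] - b[1]));
  return sorted.slice(0, n).map(([name]) => name);
}
```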
Coaching Eligibility
Reps who would benefit from coaching
Agents whose avg(total_score_percent) < 75
Backend: qa_aggregation_service.py — filters agents with average score below threshold, returns focus areas and coaching score. Limit: 5 agents
Threshold: avg < 75. Limit: 5
Immediate Actions
Most urgent skill gaps to address
Grading elements with score < 50 or questions in qualifying_questions.missed, aggregated by frequency, top 3 returned
Backend: qa_aggregation_service.py — counts occurrences of low-scoring elements and missed questions, sorts by frequency descending, formats as "Improve {element}"
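The frequency aggregation can be sketched as follows; the two input lists model low-scoring elements (score < 50) and missed qualifying questions already extracted from the calls.

```typescript
function immediateActions(
  lowScoringElements: string[],
  missedQuestions: string[],
  topN = 3,
): string[] {
  // Count occurrences across both sources.
  const counts = new Map<string, number>();
  for (const item of [...lowScoringElements, ...missedQuestions]) {
    counts.set(item, (counts.get(item) ?? 0) + 1);
  }
  // Most frequent first, formatted as "Improve {element}".
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([item]) => `Improve ${item}`);
}
```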
Total Pipeline Value
sum(recurring_charge + one_off_charge) from SFDC action data per call. Falls back to sum(deal_potential.estimated_value) when SFDC data is unavailable
Frontend: DealAnalysisDashboard.tsx — fetches SFDC action per call via /api/v1/calls/{id}/sfdc-action, sums recurring + one_off. Backend fallback: qa_aggregation_service.py sums deal_potential.estimated_value
—
High Probability Deals
Count of deals likely to close
Primary: count where probability > 0.7. Alternative: count where deal_value > 10000 AND probability > 0.6
Frontend: DealAnalysisDashboard.tsx — KPI card uses probability > 0.7 only
Primary: > 0.7. Alt: > 10000 and > 0.6
At-Risk Deals
Count of deals with a composite risk score above the at-risk threshold
Count of deals where riskScore > 0.75. The risk score is a composite of close probability (40%), call quality (25%), risk factor count (up to 20%), and customer sentiment (15%)
Frontend: DealAnalysisDashboard.tsx — computes riskScore per deal via calculateRiskScore(), counts those exceeding 0.75
Threshold: riskScore > 0.75
Monthly Revenue Projection
Expected monthly revenue
totalPipelineValue × 0.33
Frontend: DealAnalysisDashboard.tsx
Factor: 0.33
Coaching ROI Potential (aka Revenue Increase Potential)
Revenue uplift from coaching interventions on underperforming, low-probability deals
sum(deal_value × (1 − probability)) for calls where total_score_percent < 70 AND probability < 0.7
Frontend: DealAnalysisDashboard.tsx — filters by QA score < 70% and probability < 70%, sums deal_value × (1 − probability)
Risk Distribution
Frontend: DealAnalysisDashboard.tsx — risk distribution calculation
Low: <= 0.4, Medium: 0.4–0.75, High: > 0.75
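The three buckets above translate directly into a classifier:

```typescript
// Bucket boundaries from the risk distribution: <= 0.4 Low, (0.4, 0.75] Medium, > 0.75 High.
function riskBucket(riskScore: number): "Low" | "Medium" | "High" {
  if (riskScore <= 0.4) return "Low";
  if (riskScore <= 0.75) return "Medium";
  return "High";
}
```

Note that the High boundary matches the at-risk threshold (riskScore > 0.75), so "High" bucket deals and "at-risk" deals are the same set.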
How the Risk Score works (plain language):
The risk score is a single number between 0 and 1. It adds four pieces together, each making the score higher when things look worse:
Close chance (40% of the score): The lower the probability the deal will close, the more risk. "Unlikely to close" pushes the score up.
Call quality (25% of the score): The lower the QA score for the call, the more risk. "Poor call" pushes the score up.
Number of risk factors (up to 20% of the score): Each risk factor (e.g., contract issues, competitor mentions) adds a bit. The more factors the higher this part, but it stops increasing after a point (capped at 20%).
Customer sentiment (15% of the score): The more negative the customer sounded on the call, the more risk. "Unhappy customer" pushes the score up.
The four parts are added together and the result is capped at 1.0. A deal is classified as "at risk" when this number exceeds 0.75.
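The composite described above can be sketched as follows. The four weights and the 1.0 cap come from the description; the per-risk-factor increment (0.05 each, capped at 0.20) is an assumption, since the document only states that this component is capped at 20%.

```typescript
// Hypothetical deal shape for the risk composite.
interface Deal {
  probability: number;     // close probability, 0-1
  qaScorePercent: number;  // call quality, 0-100
  riskFactorCount: number; // e.g. contract issues, competitor mentions
  sentiment: number;       // customer sentiment, 0-1 (higher = more positive)
}

function calculateRiskScore(d: Deal): number {
  const closeRisk = (1 - d.probability) * 0.40;          // 40%: unlikely to close
  const qualityRisk = (1 - d.qaScorePercent / 100) * 0.25; // 25%: poor call
  const factorRisk = Math.min(d.riskFactorCount * 0.05, 0.20); // up to 20% (increment assumed)
  const sentimentRisk = (1 - d.sentiment) * 0.15;        // 15%: unhappy customer
  return Math.min(closeRisk + qualityRisk + factorRisk + sentimentRisk, 1.0);
}
```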
Estimated Deal Value
AI-estimated deal value when CRM pricing is unavailable
GPT estimates deal value in SGD based on products discussed; set to 0 if no pricing discussed
Backend: ai_analyzer.py — prompt instructs: "Deal value (in SGD) — set to 0 if no pricing was discussed"
—
Deal Probability
Likelihood of deal closing
AI-generated win probability on a 0-1 scale
Backend: ai_analyzer.py — GPT estimates based on call context. Output: deal_potential.probability (0-1)
Range: 0-1
Product Interest Level
Customer's interest in a specific product
AI-classified from transcript: High = customer explicitly agreed or showed enthusiasm; Medium = customer asked questions without committing; Low = customer hesitated or declined
Backend: ai_analyzer.py — prompt: "Base this strictly on what the customer said — do NOT infer or assume"
The NexusAI architecture splits KPI computation between the backend aggregation service and the frontend dashboard components. Understanding which layer computes what is critical for debugging discrepancies.