Each component is normalized to 0–100, then weighted. Score range: 0 to 100.
Weights:
NPS (35%) — Client voice and satisfaction.
NSS (35%) — Business outcomes.
DFS (30%) — Domain expertise and competitive moat.
Completeness Penalty: Missing components contribute 0 to the score — their weight is lost, naturally penalizing incomplete evaluations. For example, a client with only NPS and NSS (no DFS) can reach a maximum of 70, not 100.
Normalization:
NPS: Promoter = 100, Passive = 40, Detractor = 0
NSS: Weighted score is already 0–100 (used directly)
DFS: (Score ÷ 54) × 100
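The weighting, normalization, and completeness penalty above can be sketched as follows; the function and field names are illustrative, not part of the product:

```python
# Sketch of the per-client Essentiality Score: normalize each component
# to 0-100, weight it, and let missing components contribute 0.
NPS_POINTS = {"promoter": 100, "passive": 40, "detractor": 0}
WEIGHTS = {"nps": 0.35, "nss": 0.35, "dfs": 0.30}

def essentiality(nps_category=None, nss_weighted=None, dfs_raw=None):
    components = {
        "nps": NPS_POINTS[nps_category] if nps_category else None,
        "nss": nss_weighted,                                  # already 0-100
        "dfs": dfs_raw / 54 * 100 if dfs_raw is not None else None,
    }
    # Missing components are skipped, so their weight is simply lost.
    return sum(WEIGHTS[k] * v for k, v in components.items() if v is not None)

# A perfect NPS and NSS with no DFS tops out at 70, not 100:
print(round(essentiality("promoter", 100, None), 2))  # 70.0
```

Because absent components are skipped rather than imputed, an incomplete evaluation is penalized automatically with no special-case logic.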
1. Net Promoter Score (NPS)
Client Voice
What it measures: Client satisfaction with our services, expressed as how likely the client is to recommend us.
Scale: 0–10
0–6 Detractor: Dissatisfied. Risk of churn or negative referral.
7–8 Passive: Satisfied but not enthusiastic. Vulnerable to competition.
9–10 Promoter: Enthusiastic advocate. Likely to recommend us.
2. Net Success Score (NSS)
Business Outcomes
What it measures: Whether the client is achieving business success with our services. Select a primary goal from 7 options: Build an MVP, Team Extension, Legacy Modernization, Support & Maintenance, Product Scaling, Digital Transformation, or Security & Compliance. Each goal has 4 specific signal questions, plus 4 universal negative signals.
Weighted Scoring (0–100): Each of the 4 goal signals has a weight (must sum to 100%). The answer multiplier determines contribution: Yes = 100%, I Don't Know = 0%, No = 0%. Each signal's contribution = Weight × Answer multiplier. The T-score is the sum of all signal contributions before negative modifiers.
Negative Signal Modifiers: 4 universal negative signals apply deductions when answered Yes. Each negative flag deducts 50 points from the score (floor at 0). Two or more flags guarantee an Unsuccessful outcome.
Classification (threshold-based):
75–100 Successful: Client is clearly achieving business success with our services.
50–74 Unknown: Insufficient evidence to determine direction. Needs attention.
0–49 Unsuccessful: Client is not achieving the intended business outcomes.
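The weighted scoring, negative-flag deductions, and threshold classification above can be sketched as a single function; the signal weights in the example are illustrative:

```python
# Sketch of NSS scoring: weighted signals, 50-point negative-flag
# deductions with a floor at 0, and threshold classification.
MULTIPLIER = {"yes": 1.0, "no": 0.0, "i_dont_know": 0.0}

def nss_score(signals, negative_flags):
    """signals: (weight, answer) pairs; weights must sum to 100."""
    t_score = sum(weight * MULTIPLIER[answer] for weight, answer in signals)
    score = max(0, t_score - 50 * negative_flags)  # each flag deducts 50
    if score >= 75:
        label = "Successful"
    elif score >= 50:
        label = "Unknown"
    else:
        label = "Unsuccessful"
    return score, label

# Four goal signals (80 points earned) with one negative flag raised:
print(nss_score([(40, "yes"), (30, "yes"), (20, "no"), (10, "yes")], 1))
# (30.0, 'Unsuccessful')
```

Note how the 50-point deduction makes two flags decisive: even a perfect T-score of 100 drops to 0, which is why two or more flags guarantee an Unsuccessful outcome.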
Normalization: The weighted score is already 0–100 and is used directly in the Essentiality formula.
Portfolio NSS: % Successful − % Unsuccessful (equivalently, (Successful − Unsuccessful) ÷ Total × 100). Range: −100 to +100. Mirrors the NPS formula, with Successful playing the role of Promoter and Unsuccessful of Detractor.
Evidence enforcement: Every Yes or No answer requires a written evidence artifact. If the evidence field is left blank, the answer is discarded from scoring (contributes 0). No evidence = no signal.
Admin Configuration: Administrators can adjust signal questions, weights, negative modifier values, answer multipliers, and score thresholds per goal via the NSS Config page. Changes can be saved for future evaluations only, or applied retroactively to recalculate all historical scores.
Challenge Review: An independent reviewer can validate the classification with Approved, Rejected, or Approved with Notes.
3. Domain Fluency Score (DFS)
Competitive Moat
What it measures: How deeply embedded our team is in the client's domain. This is often the key differentiator that makes us irreplaceable.
Rubric Scoring (0–3 per criterion): 18 criteria across 4 levels. Each scored 0 (Not demonstrated), 1 (Emerging), 2 (Competent), 3 (Fluent). Max score: 54.
L0 — Industry Literacy (4 criteria, max 12): Industry segment, regulatory awareness, competitive landscape, industry trends.
L1 — Vocabulary & Language (3 criteria, max 9): Client terminology, internal language mirroring, communication style adaptation.
Classification (official DFS bands):
0–13 Domain-blind (0–24%)
14–27 Surface-level (26–50%)
28–40 Domain-competent (52–74%)
41–54 Domain-fluent (76–100%): Deep domain expertise. Very high barrier to replacement.
Normalization: (Score ÷ 54) × 100 = 0–100
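A small sketch of the normalization and banding, using the official DFS bands listed in full under Domain Experts below:

```python
# Sketch of DFS normalization and banding; raw scores are out of 54.
DFS_BANDS = [(41, "Domain-fluent"), (28, "Domain-competent"),
             (14, "Surface-level"), (0, "Domain-blind")]

def classify_dfs(raw_score):
    normalized = round(raw_score / 54 * 100, 1)
    band = next(name for floor, name in DFS_BANDS if raw_score >= floor)
    return normalized, band

print(classify_dfs(41))  # (75.9, 'Domain-fluent')
```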
Portfolio Aggregation
Active Clients Only: Portfolio dashboard, Essentiality Scores, and Leaderboard consider only active clients.
Portfolio NPS: % Promoters − % Detractors (equivalently, (Promoters − Detractors) ÷ Total × 100). Standard NPS formula. Range: −100 to +100.
Portfolio NSS: Average of per-client weighted NSS scores (0–100 scale); this dashboard average is distinct from the net −100 to +100 Portfolio NSS formula defined in the NSS section. Clients are classified as Successful (≥75), Unknown (50–74), or Unsuccessful (<50) based on their score.
Portfolio DFS: Average of per-client normalized Domain Fluency scores (0–100 scale, consistent with NPS and NSS normalization).
Portfolio Essentiality: Average of per-client Essentiality Scores. Note: the three portfolio summary figures (NPS, NSS, DFS) use portfolio-level formulas that differ from the per-client normalization, so weighting and summing them will not in general reproduce the portfolio Essentiality average.
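The portfolio roll-ups can be sketched as follows; the input shapes are illustrative:

```python
# Sketch of the portfolio roll-ups, computed over active clients only.
from statistics import mean

def portfolio_nps(categories):
    """categories: 'promoter' / 'passive' / 'detractor' per active client."""
    n = len(categories)
    return (categories.count("promoter") - categories.count("detractor")) / n * 100

def portfolio_dfs(raw_scores):
    """Average of normalized (0-100) Domain Fluency scores."""
    return mean(score / 54 * 100 for score in raw_scores)

print(portfolio_nps(["promoter", "promoter", "passive", "detractor"]))  # 25.0
```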
Leaderboard: Ranks Delivery Managers, Project Managers, and Tech Leads by the average Essentiality Score of their assigned active clients. Only clients with at least one evaluation component are included.
Domain Experts: Groups team members by business domain, ranked by average DFS. Unlike the Portfolio and Leaderboard, Domain Experts includes all clients (active and deactivated) because domain expertise is a property of the person, not the client's active status. Uses official DFS classifications: Domain-blind (0–13), Surface-level (14–27), Domain-competent (28–40), Domain-fluent (41–54).
Evaluation Wizard
Purpose: A guided evaluation workflow that consolidates NPS, NSS, DFS, and PAM into a single step-by-step process per client per quarter, replacing the need to navigate between separate evaluation pages.
Overview Table: Displays active clients in rows and quarters (Q1–Q4) in columns for the selected year. Each cell shows the Essentiality Score if all 3 scored components (NPS, NSS, DFS) are complete, or an “Evaluate” button with a partial completion indicator (e.g. “1/3”) if not.
Wizard Steps (4-step flow):
Step 1 — PAM: Confirm, add, or remove Partnership Alignment Meeting dates for the quarter. PAM is informational tracking only and does not affect the Essentiality Score.
Step 2 — NPS: Enter or confirm the Net Promoter Score (0–10). The previous quarter’s NPS is shown for reference. Existing data is pre-filled for confirmation.
Step 3 — NSS: Complete or confirm the Net Success Score evaluation with all signal questions and evidence. A “Copy from previous quarter” button pre-fills answers from the last quarter’s NSS evaluation when no current data exists. All signals must be answered and evidence provided for Yes/No answers before saving.
Step 4 — DFS: Domain Fluency is evaluated by external reviewers, not through this wizard. If a valid DFS score exists for the current calendar year, it is displayed for acknowledgment. If no DFS score exists, the wizard shows a message to approach GSDO leadership and allows proceeding without DFS (the score will be penalized accordingly).
DFS Validity: DFS evaluations are valid for the entire calendar year in which they were recorded. A DFS score from any quarter in 2026 is considered valid for all quarters in 2026.
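The calendar-year validity rule reduces to a one-line check; the function name is illustrative:

```python
# Sketch of the DFS validity rule: a score recorded in any quarter is
# valid for every quarter of the same calendar year.
from datetime import date

def dfs_is_valid(recorded_on: date, evaluation_year: int) -> bool:
    return recorded_on.year == evaluation_year

print(dfs_is_valid(date(2026, 2, 1), 2026))   # True
print(dfs_is_valid(date(2025, 11, 30), 2026)) # False
```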
Score Summary: After completing all steps, the wizard displays the final Essentiality Score with a breakdown of all three components (NPS, NSS, DFS) showing raw scores, normalized values, and classifications.
Permissions: Admin, Delivery Manager, and Project Manager roles can evaluate their assigned clients. Tech Leads can view the overview table but cannot initiate evaluations.
PAM Tracker
Purpose: Tracks Partnership Alignment Meetings across all active clients by month and year. PAM meetings are a key operational practice but are not part of the Essentiality Score calculation.
Calendar Grid: Displays active clients in rows and months (Jan–Dec) in columns for the selected year. Each cell shows PAM date chips that can be added or removed.
KPI: Shows the percentage of active projects that had at least one PAM in the selected year (e.g. “75% of projects with PAM (9/12)”).
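The KPI computation implied by the example can be sketched as follows; the input shape (one PAM count per active project) is illustrative:

```python
# Sketch of the PAM coverage KPI: share of active projects with at least
# one PAM date recorded in the selected year.
def pam_kpi(pam_counts):
    """pam_counts: number of PAM dates per active project this year."""
    with_pam = sum(1 for count in pam_counts if count > 0)
    pct = round(with_pam / len(pam_counts) * 100)
    return f"{pct}% of projects with PAM ({with_pam}/{len(pam_counts)})"

print(pam_kpi([2, 0, 1, 1, 0, 3, 1, 0, 1, 1, 1, 1]))
# 75% of projects with PAM (9/12)
```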
Permissions: Admin, Delivery Manager, and Project Manager roles can add and remove PAM dates. Tech Leads have read-only access.