adeyemi@adediranadeyemi.com +234 816 273 5399
NLP · Python · Power BI · Africa Fintech

What 8,165 Reviews Reveal About M-KOPA's Biggest Trust Risk

A full-scope Python and Power BI case study analysing 8,165 M-KOPA Google Play Store reviews across 78 intents, 10 primary classes, and 14 months. NLP-driven intent classification, customer journey friction mapping, and composite risk scoring converge on one finding: M-KOPA's biggest retention threat is not poor service - it is what happens to customers after they pay.

Tools
Python · pandas · Power BI · NLP
Industry
Fintech · Device Financing · Africa
Type
Customer Analytics · Product Analytics · NLP
Power BI dashboard, Overview tab: 8,165 reviews · 4.2★ average rating · intent classification breakdown.
8,165 Google Play Store reviews analysed
78 Unique customer intents classified
21,357 Thumbs on Appeal Loan Denial - the most community-endorsed pain point
50% of Offboarding-stage reviews are 1-2★

Project Overview

M-KOPA is one of Africa's largest device financing platforms, providing pay-as-you-go smartphones and solar energy products to over 4 million customers across Kenya, Uganda, Nigeria, Ghana, and South Africa. With a 4.2 average star rating across 8,165 reviews on the Google Play Store, the headline numbers look healthy. But headline numbers are where insight ends for most teams - and where this analysis begins.

This project analyses 8,165 M-KOPA Google Play Store reviews from January 2025 to February 2026, using a dataset pre-enriched with NLP-based intent classification across 78 unique intents, 10 primary classes, and 7 customer journey stages. The analysis applies Python-based statistical methods and an intent-weighted composite risk framework to answer a question that star ratings cannot: where in the customer journey is M-KOPA generating distrust, and which of those issues are policy problems versus product problems?

Central finding: M-KOPA's post-payment experience is systematically destroying the goodwill built during onboarding and usage. Customers who complete loan repayments - the most loyal, highest-lifetime-value segment - are encountering device locks, unresolved balances, and cash loan denials that 21,357 users have collectively thumbed up as serious grievances. This is a trust-extinction moment at precisely the point where a customer should be converting to a second loan or recommending the product to others.

The analysis also surfaces a silent majority problem: the "Unlock Device" intent has only 38 reviews but 6,556 thumbs - an average of 173 endorsements per review. The users who are experiencing device lock issues post-payment are so broadly validated by other users that the intent's review count dramatically understates its true prevalence.

Full Python analysis code, enriched dataset, and Power BI dashboard available on GitHub.
78 intents · 10 primary classes · 14-month trend analysis · composite risk scoring

View Repository

The Business Problem

For a fintech platform operating in a competitive, trust-sensitive market, a 4.2-star average hides more than it reveals. M-KOPA's model depends on loan repayment discipline and repeat loan uptake. A customer who repays their first device loan is not a completed transaction - they are the starting point for a cash loan relationship, a second device upgrade, and potentially years of recurring revenue. The question is whether the product experience at repayment completion is strong enough to convert a first-time borrower into a long-term financial services customer.

The raw review data held answers to strategic questions the product team could not answer from aggregate ratings:

  • Which specific intents account for the most community-validated pain - where high thumbsUpCount signals that many more users share the same issue than are writing reviews about it?
  • At which stage of the customer journey does sentiment collapse, and is that collapse driven by product failures or policy decisions?
  • Do support replies actually improve customer sentiment, or are some intents so policy-driven that support engagement makes scores worse?
  • Are there version-specific spikes in bug reports that signal regression patterns in the engineering release cycle?
  • What is the single word - or single moment - that keeps appearing across every high-risk intent cluster?

Answering these questions - and distinguishing between what customer service can fix and what requires a product or policy change - is the work this project sets out to do.

Dataset & Methodology

The dataset covers Google Play Store reviews submitted for the M-KOPA app between January 2025 and February 2026. Each review was pre-enriched with NLP-based intent classification fields before analysis.

| Field | Type | Description |
|---|---|---|
| reviewId | string | Unique identifier per review |
| content | string | Raw review text |
| score | integer | Star rating (1-5) |
| thumbsUpCount | integer | Play Store helpfulness votes from other users |
| at | datetime | Review submission timestamp |
| replyContent | string / null | Developer response text (null if no reply) |
| repliedAt | datetime / null | Timestamp of developer reply |
| appVersion | string | App version at time of review |
| primary_class | string | Top-level NLP classification (10 classes) |
| customer_stage | string | Journey stage: Onboarding, Payment, Usage, Offboarding, Support, Default, Unknown |
| intent | string | Granular NLP intent label (78 unique values) |
| Dataset Metric | Value |
|---|---|
| Total reviews | 8,165 |
| Date range | Jan 2025 - Feb 2026 (14 months) |
| Unique intents | 78 |
| Primary classes | 10 |
| Customer journey stages | 7 |
| 5★ reviews | 5,837 (71.5%) |
| 1★ reviews | 1,212 (14.8%) |
| Reviews with developer reply | ~99% for high-volume intents |

Analytical Methodology

1. Intent-Level Risk Profiling

Each of the 78 intents was profiled across: total review count, percentage of 1-2★ reviews, total thumbsUpCount weighted to low-star reviews, average rating, and standard deviation of ratings (to identify polarised vs consistently negative intents). Only intents with 10+ reviews were included in comparative analysis to avoid small-sample distortion.
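As a sketch of this profiling step, the per-intent metrics can be computed with a single pandas aggregation. The frame below uses the dataset's field names but illustrative values (not the real reviews), chosen so that the "Unlock Device" row reproduces the 173-thumbs-per-review density discussed later:

```python
import pandas as pd

# Toy frame in the review schema; values are illustrative, not real data.
reviews = pd.DataFrame({
    "intent": ["Unlock Device"] * 3 + ["Praise Service"] * 12,
    "score": [1, 2, 2] + [5] * 12,
    "thumbsUpCount": [300, 150, 69] + [1] * 12,
})

profile = (
    reviews.groupby("intent")
    .agg(
        n_reviews=("score", "size"),
        avg_rating=("score", "mean"),
        rating_std=("score", "std"),            # polarised vs consistently negative
        total_thumbs=("thumbsUpCount", "sum"),
        pct_low_star=("score", lambda s: 100 * (s <= 2).mean()),
    )
    .assign(thumbs_per_review=lambda d: d["total_thumbs"] / d["n_reviews"])
)

# Small-sample intents (<10 reviews) are excluded from comparative ranking.
comparative = profile[profile["n_reviews"] >= 10]
```

The `thumbs_per_review` density column is what surfaces "silent majority" intents whose review counts understate their prevalence.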

2. Composite Risk Scoring

A composite risk score was computed per intent to move beyond single-metric ranking: Risk = (5 − avg_rating) × log(1 + total_thumbs) × (pct_low_star / 100). This formula penalises intents with both high distress (low rating) and broad community validation (high thumbs), while normalising for volume differences across intents. The result is a single sortable score that reflects the combination of severity, scale, and community endorsement.
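The formula is straightforward to implement; using the natural logarithm, it reproduces the scores in the risk ranking table later in this piece:

```python
import math

def composite_risk(avg_rating, total_thumbs, pct_low_star):
    # Risk = (5 - avg_rating) * ln(1 + total_thumbs) * (pct_low_star / 100)
    return (5 - avg_rating) * math.log(1 + total_thumbs) * (pct_low_star / 100)

# Cross-checked against two rows of the risk ranking table:
print(round(composite_risk(1.08, 1687, 100), 1))   # Flag Unfair Practice -> 29.1
print(round(composite_risk(2.79, 21357, 52), 1))   # Appeal Loan Denial  -> 11.5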

3. Customer Journey Friction Mapping

Reviews were segmented by customer_stage to map where in the M-KOPA journey each intent concentrates. Low-star percentage per stage was computed to identify the highest-friction transition points. Stage-intent crosstabulation revealed which intents are structural (recurring across stages) vs stage-specific (concentrated at a single friction point).
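A minimal sketch of the stage segmentation and crosstab, again on illustrative rows using the dataset's `customer_stage` and `intent` fields:

```python
import pandas as pd

# Illustrative rows, not the real reviews.
reviews = pd.DataFrame({
    "customer_stage": ["Offboarding", "Offboarding", "Payment",
                       "Support", "Offboarding"],
    "intent": ["Appeal Loan Denial", "Uninstall App", "Praise Service",
               "Reset Password", "Appeal Loan Denial"],
    "score": [1, 2, 5, 1, 2],
})

# Friction metric: percentage of 1-2 star reviews per journey stage.
pct_low_star = (
    reviews.assign(low=reviews["score"] <= 2)
    .groupby("customer_stage")["low"]
    .mean() * 100
)

# Stage x intent crosstab separates structural intents (recurring across
# stages) from stage-specific ones (concentrated at one friction point).
stage_intent = pd.crosstab(reviews["intent"], reviews["customer_stage"])
```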

4. Support Effectiveness Analysis

For intents with sufficient replied and unreplied review samples, average star ratings were compared between reviews that received a developer response and those that did not. A positive lift score indicates that responding to that intent's reviews is associated with higher ratings; a negative lift indicates that responses either do not help or actively worsen sentiment - pointing to issues that require product action, not support scripts.
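The lift computation reduces to a replied/unreplied split per intent. A sketch on toy data, where a null `replyContent` marks an unanswered review:

```python
import pandas as pd

# Toy sample: replyContent is null when no developer response exists.
reviews = pd.DataFrame({
    "intent": ["Report Bug"] * 5,
    "score": [1, 3, 3, 2, 3],
    "replyContent": [None, "Thanks, fixed in the next release.",
                     "Please update the app.", None, "We are looking into it."],
})

# Lift = mean rating with a reply minus mean rating without one, per intent.
lift = (
    reviews.assign(replied=reviews["replyContent"].notna())
    .groupby(["intent", "replied"])["score"].mean()
    .unstack("replied")
    .rename(columns={True: "avg_with_reply", False: "avg_no_reply"})
    .assign(lift=lambda d: d["avg_with_reply"] - d["avg_no_reply"])
)
```

On this toy sample the lift is +1.5★; in the real dataset the same computation produced the +1.69★ Report Bug lift reported below.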

5. Keyword Extraction & Root Cause Discovery

For each high-risk intent, the review corpus was tokenised, stop-words were removed, and word frequency counts were computed to identify the specific product moments or phrases recurring in negative reviews. This moves the analysis from "users are unhappy about device locks" to "users are unhappy because their phone was locked after completing payment" - a distinction with entirely different implications for the product team.
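A minimal version of this extraction step, with a toy stop-word list and hypothetical review texts (the real pipeline would use a fuller stop-word set):

```python
import re
from collections import Counter

# Deliberately small stop-word list for illustration only.
STOPWORDS = {"the", "my", "i", "a", "was", "to", "is", "and", "it", "of", "me"}

def top_keywords(texts, n=5):
    """Tokenise, drop stop-words, count term frequencies."""
    tokens = []
    for text in texts:
        tokens += [t for t in re.findall(r"[a-z']+", text.lower())
                   if t not in STOPWORDS]
    return Counter(tokens).most_common(n)

# Hypothetical negative reviews for a device-lock intent:
corpus = [
    "My phone was locked after I completed payment",
    "Device still locked after full payment",
    "Locked out after paying everything",
]
keywords = top_keywords(corpus, 3)  # "locked" and "after" dominate
```

Even on this toy corpus, "after" surfaces immediately, which is exactly the cross-cutting signal the real analysis found.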

Intent Classification Framework

The most important analytical asset in this dataset is its NLP-based intent classification, which translates raw review text into actionable product signals. The taxonomy was designed to capture the granular purpose behind each review - not just its sentiment - so that product, support, and policy teams can act on specific customer needs rather than broad sentiment categories.

The 10 primary classes and their dominant intents are structured as follows:

Positive Experience (5,425 reviews)
  • Praise Service (5,323)
  • Praise Device (91)
  • Praise Feature (7)
Account / Loan Issues (1,420 reviews)
  • Appeal Loan Denial (427)
  • Request Loan (397)
  • Question Eligibility (245)
  • Complain Policy (97)
  • Complain Delay (53+)
Device Locking / Access Issues (358)
  • Uninstall App (199)
  • Complain Lock (46)
  • Unlock Device (37)
  • Dispute Lock (26)
  • Complain Restriction (18)
App Performance / Bugs (333)
  • Report Bug (161)
  • Update App (67)
  • Reset Password (52)
  • Check Balance (20)
  • Request Feature (18)
Trust / Reputation Issues
  • Flag Fraud (50)
  • Flag Unfair Practice (12)
  • Report Fraud (3)
  • Flag Privacy Issue (3)
Payment & Transactions
  • Dispute Charge (25)
  • Dispute Balance (10)
  • Complain Interest (29)
  • Complain Price (17)

Classification gap to note: The "Other / Noise" primary class contains 137 "Complain Service" reviews that would more accurately belong in "Customer Support." This misclassification suppresses the true volume of service complaints and understates the "Customer Support" class in any executive-level reporting. Any dashboard or KPI built on primary_class should apply a correction to this category before distribution to stakeholders.

Power BI Dashboard

The Power BI dashboard translates the Python analysis into an operational monitoring tool structured across two tabs: an executive Overview tab and a Customer Journey deep-dive tab.

Overview tab: 8,165 reviews · 4.2★ avg · Monthly volume trend peaking at 722 reviews (Apr '25) · Top 10 intents · Primary class issues breakdown.
Customer Journey tab: Stage distribution · Star-rating breakdown by primary class · Live customer voice with helpfulness ranking · 2,722 pages of reviews navigable.

Dashboard Design Decisions

The Overview tab was built for product and CX leadership: a single-screen view of headline metrics, monthly trend, and intent concentration. The Customer Journey tab was built for operational teams: the ratings-by-class stacked bar chart makes it immediately visible which primary classes are dominated by 1★ red, and the live "Customer Voice" panel enables direct review reading sorted by helpfulness - so the most community-endorsed complaints surface first, not the most recent.

The choice to sort by thumbsUpCount by default (rather than date) is a deliberate analytical decision: it prioritises the issues that the broadest community agrees on over the issues that happened to be written most recently. This is particularly important for a fintech product where a single frustrated review about a policy issue can attract hundreds of endorsements from users who share the same experience but have not written their own review.

Key Findings

Eight findings emerged from the full analysis. The most strategically important are summarised below, structured for direct product and business use.

21,357

The Most Community-Endorsed Issue Is a Policy Problem

Appeal Loan Denial accumulated 21,357 thumbsUpCount across 427 reviews - by far the most community-validated pain point in the dataset. 95% of these reviews come from the Offboarding stage, and developer replies make scores worse by 0.54★, confirming this cannot be resolved with support scripting.

173×

Unlock Device Is the Highest-Density Silent Majority Signal

Only 38 users wrote an Unlock Device review - but those 38 reviews accumulated 6,556 thumbs. At 173 thumbs per review, this is the strongest signal in the dataset that far more users are experiencing post-payment device lock issues than are writing about them.

50%

Half of All Offboarding-Stage Reviews Are 1-2★

The Offboarding stage (1,457 reviews) has a 50% low-star rate - the highest of any identifiable stage. This is the stage where customers have completed their device loan and are either requesting cash loans, waiting for clearance, or trying to exit the product. It is where M-KOPA most consistently fails its most loyal customers.

"after"

One Word Appears Across Every High-Risk Intent

Keyword extraction across the top-10 highest-risk intents reveals "after" as the single most cross-cutting term. Users are being locked out, denied loans, and frustrated after paying. The post-payment experience - not the onboarding or usage experience - is where M-KOPA's retention problem lives.

3,028

The Highest Single-Review Thumbs Count Is a Fraud Accusation

A single Flag Fraud review - describing the post-repayment cash loan experience as a scam - received 3,028 thumbsUpCount. This is the highest-endorsed individual review in the dataset and is sitting publicly on the Play Store for every prospective customer to see at the top of "Most Helpful" results.

+1.69★

Bug Reporting Is the Intent Where Replies Help Most

Report Bug reviews that received a developer reply average 1.69★ higher than those that did not. This is the most positive support lift in the dataset - and suggests that M-KOPA's tech support responses on app bugs are genuinely effective at partially recovering user sentiment.

Customer Journey Friction Map

The customer journey analysis reveals that M-KOPA's rating problem is not evenly distributed. It is concentrated at two specific stages that represent opposite ends of the relationship: where customers are trying to get more value out of the product (Offboarding / cash loans), and where something has gone wrong and they need help (Support).

| Customer Stage | Reviews | Avg Rating | % Low-Star (1-2★) | Dominant Friction Intents |
|---|---|---|---|---|
| Support | 143 | 2.62★ | 57% | Reset Password, Complain Service |
| Offboarding | 1,457 | 2.85★ | 50% | Appeal Loan Denial (405), Uninstall App (213) |
| Default | 40 | 3.30★ | 38% | Complain Lock, Request Feature |
| Usage | 412 | 3.43★ | 36% | Report Bug (135), Praise Device (91) |
| Payment | 873 | 3.89★ | 25% | Praise Service (342), Request Loan (143) |
| Onboarding | 96 | 3.90★ | 24% | Praise Service (52), Report Bug (13) |
| Unknown | 5,144 | 4.69★ | 5% | Praise Service (4,838) |

The Offboarding stage is a structural failure point, not expected friction. 1,457 reviews at 50% low-star represents nearly all of Appeal Loan Denial (405/427 reviews originate here) and almost all Uninstall App (213/216 reviews). This is not customers having problems during the product experience - it is customers who have successfully completed their loan repayment being denied the next product benefit they were expecting. That distinction matters enormously for how to fix it.

The Transition That Breaks Everything

The data points to a specific product transition that M-KOPA has not solved: the moment a customer moves from being an active device borrower to being an eligible cash loan applicant. During active repayment, customers are satisfied (Payment stage: 3.89★ avg, 25% low-star). After repayment, sentiment collapses (Offboarding: 2.85★ avg, 50% low-star). The product is not designed to celebrate the completion of a loan and smoothly transition the customer to the next financial product - it is leaving them in a state of confusion and perceived abandonment.

Risk Ranking: Top 12 Intents by Composite Score

The composite risk score combines rating severity, community validation, and low-star rate to produce a single sortable priority ranking. The top 12 intents by risk score are:

| Rank | Intent | Avg ★ | Total 👍 | % Low-Star | Risk Score | Type |
|---|---|---|---|---|---|---|
| 1 | Flag Unfair Practice | 1.08★ | 1,687 | 100% | 29.1 | Policy |
| 2 | Flag Fraud | 1.70★ | 3,165 | 83% | 22.1 | Policy |
| 3 | Complain Restriction | 1.33★ | 409 | 89% | 19.6 | Policy |
| 4 | Report Defect | 1.88★ | 1,495 | 75% | 17.2 | Product |
| 5 | Dispute Lock | 1.63★ | 228 | 89% | 16.3 | Policy |
| 6 | Uninstall App | 2.06★ | 2,093 | 71% | 15.9 | Product |
| 7 | Complain Lock | 2.08★ | 1,537 | 71% | 15.2 | Policy |
| 8 | Complain Policy | 2.24★ | 921 | 65% | 12.2 | Policy |
| 9 | Appeal Loan Denial | 2.79★ | 21,357 | 52% | 11.5 | Policy |
| 10 | Dispute Balance | 2.50★ | 1,559 | 59% | 10.9 | Product |
| 11 | Unlock Device | 2.87★ | 6,556 | 53% | 9.9 | Policy |
| 12 | Report Bug | 2.68★ | 1,041 | 53% | 8.5 | UX/Tech |

8 of the top 12 risk intents are policy or commercial model problems, not engineering problems, and only one is a pure UX/tech issue. This is a critical product roadmap input: the majority of M-KOPA's most harmful user sentiment cannot be resolved by the engineering team shipping code. It requires commercial decisions about cash loan eligibility criteria, device lock policy after repayment, and how the post-repayment customer relationship is managed.

Support Effectiveness Analysis

A key question for any CX operation is whether customer-facing responses improve sentiment or simply add noise. The analysis compared average star ratings for reviews that received a developer reply versus those that did not, for each intent with sufficient samples in both groups.

| Intent | Reply Rate | Avg ★ No Reply | Avg ★ With Reply | Lift | Implication |
|---|---|---|---|---|---|
| Report Bug | 99.5% | 1.00★ | 2.69★ | +1.69★ | Support is genuinely helping |
| Dispute Charge | 96.3% | 1.00★ | 2.19★ | +1.19★ | Support is genuinely helping |
| Uninstall App | 99.5% | 1.00★ | 2.07★ | +1.07★ | Support partially recovers |
| Unlock Device | 94.7% | 2.00★ | 2.92★ | +0.92★ | Support partially recovers |
| Complain Lock | 95.8% | 2.50★ | 2.07★ | -0.43★ | Replies make it worse - policy fix needed |
| Appeal Loan Denial | 99.3% | 3.33★ | 2.79★ | -0.54★ | Replies make it worse - policy fix needed |
| Request Loan | 99.5% | 4.50★ | 3.92★ | -0.58★ | Replies are disappointing users who expected approval |
| Report Defect | 98.5% | 3.00★ | 1.86★ | -1.14★ | Biggest negative lift - device defects need product action |

Negative support lift is the clearest signal that a product or policy fix is overdue. When responding to a review makes the average score go down, it means customers are writing back angrier after reading the response - because the response cannot give them what they need (a loan approval, a working device, a cleared repayment record). Continuing to respond to these intents with the same scripted answers is not a neutral activity. It is actively making scores worse.

Recommendations

Five recommendations emerge from the analysis, separated by who owns the fix and ranked by expected business impact. These are not surface-level suggestions - each comes with specific implementation logic derived from the data.

Priority 1 - Product + Commercial

Design the "Loan Completion Moment" as a Product Transition, Not a Silence

Target intents: Flag Fraud · Complain Lock · Flag Unfair Practice · Appeal Loan Denial
Combined thumbs: 27,562 on negative reviews
Stage: Offboarding

The data is unambiguous: customers who complete repayment experience M-KOPA as abandoning or penalising them precisely when they deserve a reward. The current product flow does nothing to celebrate repayment completion, nothing to automatically clear device locks within a defined SLA, and nothing to set clear expectations about cash loan eligibility before the customer arrives at a denial screen.

The fix requires three commercial decisions: (1) Set a published SLA for device unlock post-final-payment (24 hours is a reasonable starting point). (2) Make cash loan eligibility criteria transparent in the app before a customer ever reaches the application step - if they are not eligible, they should know why and what they need to do to become eligible, not encounter a silent denial. (3) Create a visible, in-app "loan completion celebration" flow that thanks the customer, confirms clearance, and surfaces the next product offering with clear qualification criteria.

Measurable outcome to track:

Target a 20-point reduction in Offboarding-stage low-star percentage (from 50% to 30%) within two quarters of implementing the completion flow and SLA.

Priority 1 - Engineering

Automate Device Unlock Confirmation and Build a Self-Service Clearance Status Screen

Target intents: Unlock Device · Dispute Lock · Dispute Balance · Complain Lock
Combined thumbs: 9,780 on negative reviews
Avg thumbs/review: 173× for Unlock Device alone

The Unlock Device intent has 173 thumbs per review - meaning for every user who writes a review, approximately 173 others are confirming they have the same problem without writing about it. This is not a niche edge case; it is a widespread operational failure that is invisible in aggregate star ratings because it concentrates in a small number of highly-endorsed reviews.

The engineering fix is specific: build a real-time clearance status screen in the M-KOPA app that shows the customer their payment reconciliation state, expected unlock timing, and a one-tap "unlock confirmation request" button. Currently users call support or write a Play Store review to get status updates on their own device. Both of those friction paths are unnecessary. The status data already exists in M-KOPA's systems - the problem is that it is not surfaced to the customer.

Measurable outcome to track:

Monitor Unlock Device and Dispute Lock thumbsUpCount on new reviews as the leading indicator of resolution. A declining thumbs rate on these intents (rather than declining review count) confirms the problem is being resolved in the real world, not just suppressed.

Priority 2 - Support Operations

Stop Scripting Support Responses for Policy-Driven Intents - Redirect to a Product Fix Escalation Path Instead

Target intents: Appeal Loan Denial · Complain Lock · Report Defect · Request Loan
Reply-driven score lift: -0.54★ to -1.14★

The support effectiveness analysis shows that responding to Appeal Loan Denial, Complain Lock, Report Defect, and Request Loan reviews makes average scores worse. This happens because the scripted response cannot provide what the customer actually needs - a loan approval decision, a cleared device, a working Nokia handset. The response creates a second disappointment on top of the original one.

The short-term operational fix: stop deploying generic "we're sorry, please contact support" responses to these intents and replace them with a response that (a) acknowledges the specific issue rather than using a template, and (b) includes a direct escalation path to the relevant team (device replacement team, loan review team, balance reconciliation team) rather than asking the user to navigate a general support channel. The longer-term fix is to reduce the volume of these intents through the product changes described in the two Priority 1 recommendations.

Measurable outcome to track:

Track average star rating for replied reviews in these four intents monthly. The current negative lift should move toward neutral or positive within 60 days of response template revision.

Priority 2 - Device/Hardware

Investigate Nokia-Specific Defect Rate and Build a Visible Warranty Resolution Flow

Target intent: Report Defect (65 reviews · avg 1.88★ · 1,495 👍)
Key keyword: "Nokia" appears in top-10 terms for this intent

Report Defect reviews name Nokia explicitly in their top keywords - a signal that defect complaints are concentrated in a specific device model range, not randomly distributed across the product catalogue. The combination of 1.88★ average, 1,495 thumbs, and a -1.14★ reply-driven score drop (the worst in the entire dataset) makes this the most acute product quality problem in the review corpus.

The business action is not just an engineering task: it requires pulling defect-by-model data from the support CRM, isolating the Nokia models with the highest return or complaint rates, and making a commercial decision about whether to continue offering those specific models to new customers while the defect pattern is under investigation. In parallel, build an in-app warranty claim flow so users with defective devices have a visible self-service path that does not require them to write a Play Store review to get attention.

Data needed to proceed:

CRM data by device model: fault type, replacement rate, support ticket volume. This data almost certainly exists in M-KOPA's operational systems but was not part of the review dataset.

Priority 3 - Engineering

Implement App Version Regression Testing with a Review-Signal Feedback Loop

Target intent: Report Bug (205 reviews · version 2025.117.4 had 16 bug reports - the highest spike)
Reply lift: +1.69★ - the most support-responsive intent in the dataset

Bug report volumes spike in identifiable version ranges (2025.117.x through 2025.232.x had the highest per-version bug report counts). This is a regression testing gap: releases are shipping with issues that users are encountering in production before the engineering team has caught them internally.

The practical fix is a two-part loop: (1) Add a post-release monitoring trigger that flags any version where Report Bug intent volume in Play Store reviews exceeds a threshold within 7 days of a release (e.g., 5+ bug reports per version within the first week is a yellow flag; 10+ is a red flag requiring a hotfix review). (2) Since replies on bug reports achieve the highest score recovery in the dataset (+1.69★), prioritise developer acknowledgment within 24 hours for all one-star bug reports from new versions.
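The version-flagging trigger in part (1) can be sketched directly from the stated thresholds. Version numbers and counts below are illustrative, not the real per-release figures:

```python
import pandas as pd

# Hypothetical Report Bug counts within 7 days of each release.
bug_reports_7d = pd.Series({
    "2025.117.4": 16,
    "2025.118.0": 3,
    "2025.200.0": 6,
    "2025.232.1": 11,
})

def release_flag(count):
    """Thresholds from the recommendation: 5+ is yellow, 10+ is red."""
    if count >= 10:
        return "red"      # hotfix review required
    if count >= 5:
        return "yellow"   # monitor closely
    return "green"

flags = bug_reports_7d.map(release_flag)
```

In production this would run on a schedule against fresh review pulls, with red flags routed straight to the release owner.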

Measurable outcome to track:

Average bug reports per version release as a rolling 3-month metric. Declining bug report density per release (not just absolute count) confirms the regression testing improvements are working.

Tools & Technologies

This project was built in Python for analysis and statistical computation, with a Power BI layer for operational dashboarding - a combination designed to serve both the analytical rigour required for a case study and the executive accessibility required for business adoption.

Python - Core analysis
pandas - Data wrangling
Power BI - Dashboard
NLP / Text - Intent mining
scipy / stats - Significance tests
matplotlib - Visualisation

Tags: Python · pandas · NLP Intent Classification · Power BI · Customer Journey Mapping · Composite Risk Scoring · Keyword Extraction · Sentiment Analysis · thumbsUpCount Weighting · Fintech Analytics · Africa Market Analytics · Google Play Store Mining · Product Analytics · CX Analytics · App Store Review Analysis

Work with Adediran Adeyemi

Does your fintech or app have 1,000 reviews you have never properly read?

I help product teams and fintech companies turn unstructured customer feedback into structured product decisions - from intent classification to churn signals to journey friction mapping. First call is free.