An app can say “A2” on a badge, but that doesn’t mean you’ll be A2 when you finish the course. It’s a bit like a treadmill that claims you “ran a 10K” because you walked on it for an hour. Time and labels aren’t the same as ability.
If you’re about to pay for a subscription, you want proof. Not promises, not vibes, not a progress bar that fills up fast. This guide shows how to check whether a CEFR language app is genuinely aligned to A1–C1, using evidence you can spot during a free trial, a demo, or even screenshots.
First, what “CEFR-aligned” should mean (in plain terms)
CEFR levels describe what you can do with a language in real situations, across reading, listening, speaking, and writing. They aren’t “chapters” or “days of study.”
Start with the official level descriptions from the Council of Europe and keep them open while you evaluate apps: CEFR level descriptions. If an app’s A2 content doesn’t resemble A2 “can-do” skills, the label is marketing.
A quick reference point is the official global scale: Common Reference Levels (global scale). An A1 learner handles simple personal info, an A2 learner manages routine tasks, a B1 learner can deal with many travel and daily-life situations, a B2 learner interacts with enough fluency for most everyday and work topics, and a C1 learner uses language flexibly for work and study.
The fastest way to sniff out weak CEFR claims (before the trial ends)
Here’s what strong alignment tends to look like, and what should make you pause.
Signs the CEFR mapping is serious
- Clear “can-do” outcomes per unit: Lessons say what you’ll be able to do (book a room, describe symptoms, summarize a short article), not just “Food 3” or “Past tense.”
- Skill coverage is balanced: The app doesn’t only drill recognition (multiple choice) but also pushes output (speaking and writing) with feedback.
- Level placement is explained: If there’s a placement test, it tells you what it measures and what the results mean.
- Progress ties to tasks, not streaks: The app shows mastery of functions (asking follow-up questions, narrating events), not only points.
Red flags that often mean “CEFR-ish”
- No level boundaries: You can’t find where A1 ends and A2 begins, or what changes at each stage.
- Everything looks the same: The “B1” lessons still feel like A1 (short phrases and picture taps), with little extended listening or reading.
- Big claims with no method: “Reach C1 fast” without showing a syllabus, outcomes, or assessment approach.
If you like comparing app styles, this kind of feature-based thinking is also useful when choosing between popular tools, even when CEFR isn’t the headline; see Rosetta Stone vs Duolingo: detailed comparison.
What to check inside a free trial (screenshots you should hunt for)
Don’t just “try a few lessons.” Run a mini audit. You’re paying for a course, so you want course-level evidence.
1) A published syllabus that names the level and the goal
Look for a course map that shows:
- A1, A2, B1, B2, C1 as separate paths (or clearly marked sections)
- what grammar and functions appear at each level
- what kinds of texts and audio you’ll handle
If the app won’t show a syllabus until after payment, treat that as a warning sign.
2) CEFR-style tasks, not only app-style exercises
CEFR is about performance. In the trial, find tasks that resemble real use:
- A1–A2: short dialogues, simple forms, basic messages, listening for key details.
- B1: longer conversations, short stories, instructions, giving reasons, describing experiences.
- B2: opinion pieces, problem-solving discussions, summarizing arguments, more natural-speed audio.
- C1: long-form listening and reading, nuanced tone, structured writing, professional or academic topics.
If everything is one-sentence prompts, it’s hard to believe a real B2–C1 claim.
3) Speaking and writing that are actually assessed
Many apps are strongest in reading and listening because those are easier to score. Speaking and writing take more work.
Inside the trial, ask:
- Does speaking feedback go beyond “good job” (for example, specific sounds, stress, or word endings)?
- Are you asked to speak in longer turns (20–60 seconds), not just repeat a word?
- Is writing more than fill-in-the-blank?
- Do you get corrections that explain what went wrong?
If speaking and writing are missing, the app may still be useful, but its CEFR claims should be treated as partial.
4) A level definition you can compare to official CEFR “can-do” grids
A simple cross-check is the Council of Europe self-assessment grid: Self-assessment grid (CEFR Table 2).
Pick one row (say, spoken interaction at B1). Then look for app lessons that practice that exact ability. If you can’t find them, the mapping may be loose.
Independent ways to confirm your level (so you’re not trusting the app’s meter)
Think of an app’s “level” like a fitness watch estimate. Helpful, but not final.
Use at least one outside check:
- For English learners, try an independent placement test like the EU Academy option: English Placement Test.
- Another option that reports CEFR levels is Tracktest.
Also do a reality check using official “can-do” statements from the European Language Portfolio: Self-assessment Grids (ELP). If the app says you’re B1, you should be able to do several B1 tasks without heavy support.
Important limitation: most quick placement tests focus on grammar and vocabulary, plus reading and listening. They may rate your receptive skills higher than your speaking and writing. That gap is normal, but it matters if you’re paying for “B2 speaking.”
Printable checklist: “Is this app really CEFR-aligned?”
Keep this in your notes like a saved download, and check off whatever you can verify during the trial.
- The app shows an A1–C1 roadmap (or clear level boundaries).
- Each level lists “can-do” outcomes, not only topics.
- Lessons include longer listening and reading as levels rise.
- Speaking practice includes feedback that’s specific.
- Writing practice includes corrections or model answers.
- Assessments exist at the end of units or levels (not just daily quizzes).
- You can see sample tasks for the level you plan to buy.
- The app explains what “CEFR-aligned” means in its own product.
- Progress is tied to skills mastered, not streaks.
- An independent test or self-assessment roughly agrees with the app’s level.
0–10 CEFR alignment scorecard (quick, fair, and hard to game)
Score each line 0 (no), 0.5 (partly), or 1 (yes). Total out of 10.
| Criterion (evidence you can see) | Score (0–1) |
|---|---|
| Level roadmap with clear A1–C1 boundaries | |
| “Can-do” outcomes written per unit/level | |
| Tasks match CEFR-style real-life use (not only drills) | |
| Level-up tests that cover more than vocabulary | |
| Listening grows in length and speed by level | |
| Reading grows in length and complexity by level | |
| Speaking assessed with actionable feedback | |
| Writing assessed with corrections or strong models | |
| Level claims match official CEFR descriptions | |
| Independent check (test or self-assessment) supports the level | |
| Total (0–10) | |
A practical rule: 8–10 looks solid; 5–7 is mixed but may still be worth it for the right goal; 0–4 suggests the CEFR label is mostly a tag.
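If you keep trial notes digitally, the tally is easy to automate. Below is a minimal Python sketch, not part of any app: the criterion names mirror the table above, the 0/0.5/1 values are made-up placeholders, and half-point totals that land between bands (like 7.5) are grouped with the lower band, which is one reasonable reading of the rule.

```python
# Minimal sketch of the 0-10 scorecard tally. The 0 / 0.5 / 1 values below
# are illustrative placeholders, not scores for any real app.

ALLOWED_SCORES = {0, 0.5, 1}

scores = {
    "Level roadmap with clear A1-C1 boundaries": 1,
    "Can-do outcomes written per unit/level": 0.5,
    "Tasks match CEFR-style real-life use (not only drills)": 1,
    "Level-up tests that cover more than vocabulary": 0,
    "Listening grows in length and speed by level": 1,
    "Reading grows in length and complexity by level": 1,
    "Speaking assessed with actionable feedback": 0.5,
    "Writing assessed with corrections or strong models": 0,
    "Level claims match official CEFR descriptions": 0.5,
    "Independent check supports the level": 0.5,
}

for criterion, score in scores.items():
    if score not in ALLOWED_SCORES:
        raise ValueError(f"{criterion!r}: score must be 0, 0.5, or 1")

total = sum(scores.values())

# Bands from the practical rule above; half-point totals that fall between
# bands (e.g. 7.5) are grouped with the lower band here.
if total >= 8:
    verdict = "looks solid"
elif total >= 5:
    verdict = "mixed, but may still be worth it for the right goal"
else:
    verdict = "the CEFR label is mostly a tag"

print(f"CEFR alignment score: {total:g}/10 ({verdict})")
```

With the placeholder values above, this prints a 6/10 “mixed” verdict; swap in your own observations from the trial to get a real score.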
Conclusion: pay for proof, not labels
A CEFR language app that’s truly aligned will show its work: clear level outcomes, real tasks, and assessments that look like the abilities CEFR describes. If the trial hides the syllabus or avoids speaking and writing, you’re not seeing the full story.
Before you subscribe, run the checklist, score the app out of 10, then confirm your level with at least one outside method. You’ll spend less money fixing gaps later, and you’ll know what you’re actually buying: ability, not badges.