Should a Montessori school use MAP Growth?
The autumn cycle results land in the head of school's inbox on a Tuesday in October. NWEA MAP Growth, all Environments, all goal areas, percentile bands, growth projections, the lot. The head of school reads them with the kind of attention these reports deserve and the kind they almost never receive. By Friday the file is in the shared drive. By February it has been opened twice: once when a parent asked, and once when the board wanted a slide.
Then the spring cycle runs, and the same thing happens.
This is the most common pattern in Montessori schools that have adopted standardised assessment. The data exists. It is technically accurate. It is largely inert.
The defensive answer to "should a Montessori school use MAP" is that Montessori is individualised and MAP is standardised, so the two are incompatible by design. That answer is comforting and not quite right. MAP and Montessori are compatible at the level of measurement: a child's reading comprehension is a real thing, the test is a reasonable instrument for getting at it, and a Montessori child sitting it once or twice a year is not betraying the pedagogy. There are good reasons to take the test, and there are honest ways to use the result.
The harder question is the one schools rarely ask out loud: what did this data change on Monday morning?
If the answer is "nothing", the school is paying for evaluation theatre. The instrument is fine. The use of the instrument is what fails. Standardised assessment earns its place when the data changes a decision a Guide makes the following week. If it does not, the school has bought a folder of PDFs.
What MAP actually does, in Montessori vocabulary
MAP Growth is a computer-adaptive test that places a child on a vertically scaled continuum within each goal area it covers: reading, language usage, mathematics, and science where the school subscribes to it. The score is a RIT number on a scale that holds steady across years, the percentile is relative to a national norm, and the growth projection compares a child's autumn-to-spring movement against the average for children with similar starting points. The test takes about an hour per goal area and runs cleanly on an iPad outside the work cycle.
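For the technically minded reader, the shape of what arrives twice a year is small. A minimal sketch in TypeScript, with field names of our own invention; this is not NWEA's export schema:

```ts
// A hedged sketch of what one MAP Growth cycle yields per child and goal
// area. Field names are illustrative, not NWEA's export format.
type GoalArea = "reading" | "languageUsage" | "mathematics" | "science";

interface MapResult {
  childId: string;
  cycle: "autumn" | "spring";  // the two windows this post assumes
  goalArea: GoalArea;
  rit: number;                 // the RIT score, stable across years
  percentile: number;          // 1-99, against the national norm for the grade
  projectedSpringRit?: number; // the growth projection from the autumn start
}
```

Everything this piece says about inert data is about what happens, or fails to happen, downstream of a record this simple.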
What MAP does well in a Montessori school:
- It places a child on a continuous scale across years, so a third-plane child's trajectory is legible in a way letter grades and report cards cannot match.
- It surfaces silent under-performance the school did not know about, in a goal area that is hard to observe through Work alone.
- It produces a record families and accreditors find familiar, which lowers a particular kind of friction at admissions, at re-enrolment, and in any conversation with a regulator.
What MAP does not do, regardless of the school using it:
- It does not tell a Guide what to present next. A goal-area score is not a materials-arc position.
- It cannot tell a child who has had the Squaring Chains from a child who has not, even when their results are identical.
- It does not know whether a child is in a sensitive period or a stuck week, and it cannot read an observation log.
The first list is what makes MAP useful in a Montessori school. The second list is what schools quietly hope MAP will do, and what it cannot. Most disappointment with MAP in Montessori schools is the gap between those two lists, usually named out loud for the first time in a parent conference.
Where the data goes wrong
Most schools that run MAP alongside Montessori end up with two parallel records that do not speak to each other. One is the record-keeping tool, whether Transparent Classroom, Montessori Compass, or a disciplined spreadsheet, that holds which presentations each child has had and where they stand in the materials arc. The other is the MAP report, which lives in a folder and produces a percentile band twice a year. The Guide reads both. The Guide is the interpreter. The interpretation is a cognitive act that has to happen for every child, every cycle, in the head of the person who knows them best, and it has to happen between assessment cycles, while the daily work cycle is also running.
This is where most schools lose the value. Not because Guides cannot interpret. Guides interpret well. They lose it because translation at scale, with no surface that holds the translation, is not work that survives the week.
A Guide who reads a MAP report on Wednesday and decides on Wednesday evening that a child's mathematics percentile slipped because the child has been spending the work cycle on geography for a month is doing exactly what the data is supposed to make possible. The decision is right. The decision dies on Friday because there is nowhere for it to live, and by the next cycle the same translation has to happen again from scratch. Multiply that by 24 children and three Guides, and the shape of the problem is clear: it is not the data. It is the absence of a place for the inference to land.
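What "a place for the inference to land" means is less exotic than it sounds: a record that holds the Guide's Wednesday-evening reading next to the score, so the next cycle starts from it rather than from scratch. A sketch under our own naming, not a product spec:

```ts
// A hedged sketch of an interpretation record: the Guide's reading, stored
// next to the score so it survives past Friday. All names are illustrative.
interface Interpretation {
  childId: string;
  cycle: string;       // e.g. "2024-autumn"
  goalArea: string;    // e.g. "mathematics"
  observation: string; // "percentile slipped; a month of geography, not a gap"
  decision: string;    // "no intervention; revisit the arc in three weeks"
  guide: string;       // who made the reading
  recordedAt: Date;
}

// The test of any such record is retrieval, not storage: the next cycle's
// reading should begin where the last one ended.
function lastReading(log: Interpretation[], childId: string, goalArea: string) {
  return log
    .filter((i) => i.childId === childId && i.goalArea === goalArea)
    .sort((a, b) => b.recordedAt.getTime() - a.recordedAt.getTime())[0];
}
```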
The Montessori-specific failure mode
MAP fails in Montessori schools in a way it does not fail in the district schools that buy it at scale, and naming the failure mode honestly is the first step.
In a district school, MAP results feed a textbook-bound program. The score routes the child to a unit, the unit is the next thing the child does on a screen, and the loop closes. The translation problem is solved by the curriculum being homogeneous enough that "below grade-level in fractions" maps onto "the fractions unit". The translation does not fall to the adult in the room; the program does it. The cost is that the child's morning is structured around a screen and a normed sequence rather than a Guide and a materials arc, but the loop closes.
In a Montessori school, the loop does not close. There is no "fractions unit". There is the Multiplication Bead Bar, the Stamp Game, the Decimal System, the Test Tubes, the Decanomial, the Squaring Chains, the Cubing Material, a sequence of materials with an order and a logic that AMI training internalises into the Guide and that no commercial assessment is mapped to. So the data lands without a translation surface. A percentile arrives, an inference must be drawn, a presentation must be selected, and all of that work falls to the Guide between cycles, with no record of how the inference was made.
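To make the missing piece visible, here is the shape the translation surface would have to take. Everything below is hypothetical: the arc is abbreviated, the flag logic is one possible policy, and no commercial assessment ships any of it:

```ts
// A hypothetical translation surface: the thing no commercial assessment is
// mapped to. The arc below is abbreviated and the logic is one possible
// policy, invented for illustration.
const mathsArc = [
  "Decimal System",
  "Stamp Game",
  "Multiplication Bead Bar",
  "Test Tubes",
  "Decanomial",
  "Squaring Chains",
  "Cubing Material",
] as const;

// Read the score against where the child already stands in the arc, never
// against a grade level. A flag widens the review to the last presented
// material as well as the next one; it never skips the arc's order.
function candidateNextPresentations(
  presented: string[],  // what this child has had, from the record-keeping tool
  flaggedByMap: boolean // did this cycle's percentile raise a flag?
): string[] {
  const next = mathsArc.findIndex((m) => !presented.includes(m));
  if (next === -1) return [];
  return flaggedByMap && next > 0
    ? [mathsArc[next - 1], mathsArc[next]]
    : [mathsArc[next]];
}
```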
This is why the same percentile changes nothing in most schools. The translation is too expensive to do every week, in every Guide's head, with no place for the result to live, and so over time the cost is paid silently: the data still lands, the report still circulates, and Monday morning still looks like the Monday before.
What changes when a Guide leaves
Half of what makes MAP data inert in Montessori schools is the translation problem above. The other half is a continuity problem we have written about elsewhere: the predictive understanding that lets a Guide read a MAP score in the context of the child stays in the Guide's head, and the moment the Guide rotates out, the school is reading the next cycle's results without the previous cycle's interpretation. The percentile is the same. The capacity to act on it is gone.
A school that has not solved the continuity problem cannot solve the MAP problem either. The data and the interpretation have to live somewhere outside any one Guide before either is worth the test fee.
Three honest decisions a head of school can make
Schools that get value from MAP without losing the pedagogy tend to have made three decisions, deliberately, and written them down where the next head of school can find them.
Decide what MAP is allowed to change, and what it is not. A school that uses MAP without a written policy ends up with the data quietly influencing decisions it was never meant to inform. Is MAP a flag-raiser the Guide investigates, or a placement instrument? Does a low percentile prompt a presentation review, a Practical Life conversation with the family, an SEN screening, or nothing at all? Does a high percentile change the recommendation track for the following cycle, or only the parent-conference framing? The answers are not universal; what matters is that the school answers them, and writes them down. A one-page policy makes the data legible to a Guide who arrives in March, and to a board member who asks at the September meeting why the school is testing at all.
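For a school that wants the one-page policy to be unambiguous, the same page can be kept in a structured form. A sketch; the values are one school's possible answers, not a template to copy:

```ts
// The first decision, written down. Values here are illustrative answers,
// not a prescription; the point is that they exist and are findable.
const mapPolicy = {
  role: "flag-raiser", // never "placement instrument"
  onLowPercentile: [
    "presentation review with the Guide",
    "observation log check before any family conversation",
  ],
  onHighPercentile: [
    "parent-conference framing only; no track change mid-cycle",
  ],
  neverChanges: [
    "a child's Environment placement",
    "the order of the materials arc",
  ],
  reviewedBy: "head of school, at the September board meeting",
};
```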
Pick a cadence the school can actually run. Twice a year is enough, if the school treats each cycle as a moment of work, not a moment of measurement. The work between cycles is the part that matters: a deliberate session per Guide, in the week the results land, where each child's percentile is read against the materials arc and at least one decision is recorded. If a school cannot afford that session, the school cannot afford MAP. Buying the test without the session is the most common form of evaluation theatre, and the cost is paid most heavily by the children whose results most needed reading.
Walk a child's results past the Guide before you walk them past the parent. Most schools learn this the hard way. A parent conference framed by a percentile band, and not by the materials arc, lands as judgement on the child rather than as information about the school's planning. Reverse the order. The Guide reads the percentile against the materials arc first, decides what it changes, and only then does the family see the result. This protects the child, the parent, and the Guide, and it keeps MAP in its proper place: an input to a planning conversation, not the conversation itself.
None of these three decisions require a software change. They require the school to be honest about what the data is for.
Where the problem really gets solved
The three decisions above shrink the gap. They do not close it. The gap closes when the translation from a MAP goal-area score to a specific next presentation in the materials arc is held outside any one Guide's head, by design, in a system that knows both sides of the translation. A system that reads MAP data per goal area, places it against where the child stands in the materials arc, and surfaces a recommendation the Guide can confirm, dismiss, or annotate. That recommendation is not the test. It is the test made usable.
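In sketch form, and only in sketch form (every name below is ours, not a shipped API), the closed loop is a recommendation the Guide resolves rather than obeys:

```ts
// A sketch of the closed loop, not a shipped API. It joins a MAP flag, the
// child's arc position, and a recommendation the Guide resolves.
type Resolution = "confirmed" | "dismissed" | "annotated";

interface Recommendation {
  childId: string;
  goalArea: string;
  suggestedPresentations: string[]; // from the translation surface
  rationale: string;                // why the data raised this, in plain words
  resolution?: { by: string; outcome: Resolution; note?: string };
}

// The Guide stays the decision-maker: the system proposes, the Guide
// resolves, and the resolution is itself a record the next cycle can read.
function resolve(
  rec: Recommendation,
  guide: string,
  outcome: Resolution,
  note?: string
): Recommendation {
  return { ...rec, resolution: { by: guide, outcome, note } };
}
```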
That is the problem we are working on. We will say more about it in the coming weeks.
For now, the question for any head of school running MAP is the simplest one. What did the autumn cycle change in your planning? If the answer is on a page somewhere, the school is using MAP as it is meant to be used. If the answer is in someone's head, the cost of that head leaving is also the cost of the data.
A school that closes the loop between MAP and Monday morning earns the test. A school that does not, prints reports.
Montessori Mind is the school operating system built only for Montessori. Born in a Marbella school, designed against the work cycle you already run. Early access opens in stages.