Doing data better in schools

Joshua Perry knows data and what it can do for the education system. EdSmart gains his insights on digital maturity and how effectively schools are working with data compared to other industries.

Digital maturity is arguably the single most pressing issue that schools are dealing with today. It’s not just about advancing school systems; it’s about creating efficiencies that address staff workload and the other issues restricting the sector.

It’s a statement supported by Joshua Perry, co-founder of UK EdTech businesses Carousel Learning and Smartgrade.

Schools are filled with smart people who understand that technology and data can help them, yet he believes the sector is fighting to overcome a number of obstacles in accelerating digital transformation. The biggest hurdle? Funding.

“It’s hard to find money in a school budget to invest,” Joshua Perry begins. “For example, I arrange dinners for Multi-Academy Trusts (MATs) around the UK, and a very common thing that comes up is MATs wanting to have better data warehousing. But it’s quite a complex and often a custom thing to commission.”

He’s quick to point out this doesn’t mean schools and MATs “are inherently backward”. Far from it. Instead, he says, “I actually think there is an awful lot of data analysis that happens in schools without effective technology. You’d be amazed at the quality of some of the spreadsheets people have set up to manage themselves. They know the problems they want to solve, but it’s the lack of budget that holds them back.”

Scale, integrity and reliability

There are a number of challenges that muddy the waters when it comes to assessing how well schools utilise their data compared to other industries, particularly the scale, integrity and reliability of that data.

According to Joshua, schools are often metaphorically “drowning in data” and it’s not unusual for that data to be poor in quality, certainly in the UK.

When Joshua talks about school data, he’s referring to three categories. The first is non-academic data, like finance information. The second is data that has an impact on academic outcomes but isn’t assessed, such as attendance. The third, and what Joshua sees as the most important, is assessment data.

As he explains, “The thing about assessment data is two schools could have the same spreadsheet or system for managing assessment data, but one could put in high-quality, reliable data that aligns with their curriculum, and the other could put in data that’s not great quality and prone to the cognitive biases of teachers because they’re using teacher judgments.”

“So you can have two systems that externally look identical, but one is actually helping to drive outcomes and the other is, if anything, worse than nothing because it’s not actually based on valid or reliable information. I think that’s the issue I see in schools but, with Smartgrade and Carousel, it’s something I’m trying to tackle.”

Scale is the other issue that affects how well schools use their data compared to other industries: “When you talk about schools generally, it’s very hard to compare a large London secondary, which potentially has a £10 million-plus budget, with a village primary, which will have a budget of a few hundred thousand. They’re just so different as institutions that the capacity they have to analyse data is different too.”

Shining examples and cautionary tales

EdTech is an area that often requires specialist knowledge and, despite the time and effort schools and Trusts put into getting it ‘right’, there are some mistakes Joshua has seen made time and time again.

The most common error stems from 2016, when the UK Government introduced a set of national primary assessments and changed the grading system to a scaled score.

“But actually,” he explains, “that grading system—where 100 is the expected standard nationally—looks very similar to what you would traditionally in education call a standardised score, or previously, a scaled score, which is designed completely differently… Standardised scores in education look very similar, with 100 typically being the mean average, and two standard deviations on either side getting you a range of 70 to 130.”

“So what you have is a lot of primary schools in the UK confused between those two things. They buy a standardised assessment from an external vendor, and it gives them a standardised score, where they get a score of 100 — which is the average, because that’s what the assessment vendor has designed that assessment to do — and they think it’s the scaled score, which is the national expected standard.”

He says this is a common mistake, and one that can be avoided. “It’s just confusing standardised scores in the traditional academic sense with scaled scores. Those kinds of mistakes are really maddening and saddening because you’ve got people looking at data and thinking they’re doing a useful thing, but it’s really misleading. You see lots of that.”
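
To see the trap concretely, here is a minimal sketch in Python of the two different things a “100” can mean. The raw marks and the expected-standard threshold are invented for illustration: a standardised score is norm-referenced (100 is the cohort mean), while a scaled score of the 2016 kind is criterion-referenced (100 marks a fixed standard).

```python
from statistics import mean, stdev

# Hypothetical raw marks for a cohort (illustrative only)
raw_marks = [34, 41, 47, 52, 58, 63, 69]
mu, sigma = mean(raw_marks), stdev(raw_marks)

def standardised_score(raw: float) -> float:
    """Norm-referenced: 100 is the cohort mean, 15 is one standard
    deviation, giving the familiar 70-130 range Joshua mentions."""
    return 100 + 15 * (raw - mu) / sigma

# A scaled score instead pins 100 to a fixed expected standard.
EXPECTED_STANDARD_RAW = 55  # hypothetical threshold, not the real one

for raw in raw_marks:
    meets = "yes" if raw >= EXPECTED_STANDARD_RAW else "no"
    print(f"raw={raw}  standardised={standardised_score(raw):6.1f}  meets standard: {meets}")
```

The student on 52 raw marks gets a standardised score of exactly 100 yet sits below the hypothetical expected standard; reading that 100 as “meets the national standard” is precisely the confusion Joshua describes.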

Alternatively, there are instances Joshua’s seen where schools and Trusts are using data in really innovative ways. He cites two examples:

“In the world of Smartgrade, a thing that we help Multi-Academy Trusts with is how they standardise. A very common problem, particularly at secondary [schools], is you want to create reliable assessments across, let’s say, 10 or 20 secondary schools, but you can’t buy an external assessment. You’ve somewhat standardised your curriculum, but schools have traditionally often written their own assessments. What Smartgrade helps schools do where they’ve standardised their curriculum, or at least have a set of common standards for the end of the year, is use their own schools within their Multi-Academy Trust as the standardisation sample. You set and share assessments across a Trust and then we’ll give you back a percentile rank and some other things for each student based on their position within that Trust.”
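
As a rough sketch of that standardisation idea, imagine pooling one assessment’s marks from every school in a Trust and ranking each student against that pooled sample. The scores and the mid-rank percentile formula below are hypothetical, not Smartgrade’s actual method:

```python
from bisect import bisect_left, bisect_right

# Hypothetical marks pooled from all schools in a Multi-Academy Trust
trust_scores = sorted([42, 55, 61, 61, 67, 70, 74, 74, 74, 80, 85, 91])

def percentile_rank(score: float) -> float:
    """Mid-rank percentile of a score within the pooled Trust sample:
    count of scores below, plus half of the ties, over the total."""
    below = bisect_left(trust_scores, score)
    ties = bisect_right(trust_scores, score) - below
    return 100 * (below + 0.5 * ties) / len(trust_scores)

print(percentile_rank(74))  # one student's standing across the whole Trust
```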

“I think the second thing we’re seeing more of, and this is what Carousel does, is thinking in terms of question banks – not quizzes – for assessments,” he continues. “One of the problems is, if you’re teaching and you want to set quizzes or tests for your students, you often design those assessments on the hop: it’s a Sunday night, you’ve got to write an assessment for your kids to do tomorrow, and it’s a 10-question assessment on vocab or whatever. That has a value, but it’s very hard to be systematic across a year, or even across multiple years.”

“What we’re seeing the more sophisticated Trusts do is start the year by saying, ‘Here’s my question bank, and what my technology is going to help me do is not just ask all of those questions once, but re-ask those questions’. Because the more you ask a question, the more it helps that knowledge embed in a student’s long-term memory. What the technology’s going to do is help you ask the best question at the best time — which questions haven’t you asked for a long time? That kind of thing.”
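
A toy version of that scheduling idea appears below: the “best question at the best time” is approximated as the question that has gone unasked the longest. The question bank and the staleness heuristic are invented for illustration and are not Carousel’s actual algorithm:

```python
from datetime import date
import heapq

# Hypothetical question bank: question id -> date last asked (None = never)
last_asked = {
    "photosynthesis-definition": date(2023, 9, 12),
    "mitosis-stages": None,
    "osmosis-example": date(2023, 10, 3),
    "enzyme-denaturing": date(2023, 8, 30),
}

def pick_quiz(bank: dict, today: date, n: int) -> list[str]:
    """Choose the n questions that have gone unasked the longest,
    treating never-asked questions as the highest priority."""
    def staleness(qid: str) -> int:
        last = bank[qid]
        return (today - last).days if last else 10_000
    return heapq.nlargest(n, bank, key=staleness)

print(pick_quiz(last_asked, date(2023, 10, 20), n=3))
```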

Where to next?

All things being equal, we ask Joshua where he sees the analysis of data progressing, in terms of school governance and compliance, in an ideal world. Without hesitating, he returns to the importance of the quality, not so much the quantity, of the data that schools and Trusts have at their disposal.

“Governors look at a lot of data, and arguably too much data,” he observes. “And, as I said earlier, I don’t think schools need more data, I think they need better quality data. I think people in the sector like me can help schools with that. Companies like the ones I’m a part of can help with that.”

The one thing he’d like to see more of in the UK system, specifically in relation to data, is adaptive assessments: “I don’t know if you have the same thing in Australia but, in the UK, tiered assessments are very common at the secondary [school level], where, for a student, you decide whether you’re putting them in for the lower paper – what we call the foundation paper – or the higher paper for a national assessment.”

“That’s a really high-stakes decision,” he goes on to say. “If you’re putting a student in for a foundation paper, you’re saying they can only get a certain mark, they can’t get above that. If you’re putting them in for the higher paper, you’re saying they need to perform to at least this level because, if they don’t, they get an ungraded result. So some students get put in for a higher paper and end up getting nothing, because their teacher shot high, but they didn’t make that standard.”

The hand-in-glove partnership of technology and data, he says, needs to be better utilised to deliver better outcomes for teachers and students: “It’s not quite the same across all subjects but, these days, you can design adaptive assessments where technology delivers the next question based on how a student did on the preceding questions, and the paper adapts to an ability level as you go along.”

“I think one way we can have better quality data is by removing things like where the system’s been designed to produce unfair outcomes for certain students because they’ve been entered into the wrong assessment.”
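
For a sense of what “the paper adapts as you go along” means mechanically, here is a deliberately simplified sketch: a fixed-step rule that raises the difficulty of the next question after a correct answer and lowers it after an incorrect one. Real adaptive assessments typically estimate ability with an item response theory model rather than a toy rule like this:

```python
def adaptive_difficulties(answers: list[bool], start: float = 5.0) -> list[float]:
    """Return the difficulty served before each response, stepping the
    level up on a correct answer and down on an incorrect one."""
    level, served = start, []
    for correct in answers:
        served.append(level)
        level += 0.5 if correct else -0.5
        level = max(1.0, min(10.0, level))  # clamp to the 1-10 scale
    return served

# A student who starts strong then struggles settles near their own level,
# avoiding the all-or-nothing risk of a tiered paper decision.
print(adaptive_difficulties([True, True, False, True, False, False]))
```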

“Measuring progress is its own incredibly complex question,” Joshua concludes. “There are ways you can do it that are meaningful. There are also ways you can do it that almost skew the data or contort the data unhelpfully, and it would take a whole other interview to unpack that.”

Measuring progress is worthwhile and important, but Joshua has noted a tendency towards an obsession with finding fine gradations of progress, and then designing data scales that are impossible to use or implement.

“There’s a false accuracy assumed of them that actually makes things worse.”

Read more about what Joshua Perry has to say concerning data in EdTech at his Bring More Data blog.
