Interview

November 28, 2018

Professor Rose Luckin: how AI in education should work

The Institute of Education prof wants to connect innovators, educators, academics and ethics.


Layli Foroudi

8 min read

When Professor Rose Luckin was looking to change careers from teaching in 1990 after having kids, she chose an AI and computing course. It was a rogue choice — her family thought she was going to study “artificial insemination”. Today, she spends a lot of time trying to join the dots across the edtech industry, figuring out how to connect academia and startups, and how to use AI in education properly and ethically.

The edtech industry has come a long way. From the cottage industry that it was when Luckin was doing her PhD, it has entered the mainstream — with 850–1000 education startups in the UK alone, and tech giants like YouTube and Amazon muscling in too.

To join the dots, Luckin has set up EDUCATE, a startup clinic based at the UCL Institute of Education, to connect entrepreneurs with educators and with research — what she calls “the golden triangle”. Luckin’s latest initiative is centred on ethics — in October 2018, she co-launched the UK’s Institute for Ethical AI in Education.

In between her meetings with schoolteachers and startup founders in North London, Sifted caught up with Professor Luckin about her enthusiasm for, and fear of, tech’s potential to revolutionise education.

From 1990 when you started studying AI until now, what would you say are the main shifts in the way that the industry works?

With more powerful technologies, we’ve been able to do much more sophisticated things. But the biggest change is that now you have huge companies that are interested in this space. When I was a PhD student, I built my own piece of software; I had to do everything. So it was like a little cottage industry of us producing these things. I think the connection between the academic community and the industrial community is only just starting to get going, because a lot of the work that we’ve done in the academic community is largely unknown to the companies who are developing commercial products — and that’s a fault on both sides. So creating those connections is really important.

You have been running EDUCATE — a programme based within the university that supports edtech startups in the UK. Is the purpose of that to create some of those connections?

We want to connect the people who build the technology, the people who understand whether it’s working, and the people who use it. We offer training to people who are developing technology to help them develop a “research mindset” — to want to know whether their product or service is working, and to want to know how to evidence that. When many companies start out, they think of research only as market research.

One thing that is universally a concern is the lack of capacity among education professionals to integrate technology. That is a particular issue when it comes to AI technology. We need educators who understand what it means to train an algorithm, what it can do, what it can’t do, and when they should question a decision that an AI system has made. What really worries me is that I don’t see anyone focusing on educating the educators. Because if they can’t do that, how do they educate the students to be able to do it?

What about your understanding of education? Do we have a common definition or understanding of what education is?

Currently we define education very much as the pursuit of knowledge. One of the big difficulties we’re facing at the moment, and one of the reasons the [UK’s] education select committee has an inquiry on the fourth industrial revolution, is that there is a lot of evidence suggesting that the nature of the skills, knowledge and expertise that we need people to have is changing very rapidly, because the workplace is changing. We’re realising that education is not what we need it to be.

That’s precipitated by artificial intelligence. Because once you build machines that can learn, it is possible to build an AI system that would ace a lot of the things that we currently measure our education systems by, and once you do that, you have to start thinking differently about what we measure. It is not intelligent to focus on developing in humans the same things that we develop in our AI. We treasure what we measure, and what we are measuring is what we’ve automated.

Now your most recent focus has become ethics in AI. What prompted the launch of the Institute for Ethical AI in Education?

If you look at the formal education system in the UK, schools have to answer to an inspectorate, Ofsted. Universities and further education — the same thing. Set up an education business online and there’s no regulation, other than data and business regulation. You don’t need educational qualifications. So what we’re trying to do is look at what principles and what framework could be put in place that might help to protect people. We need some regulation, and we’ve set up the institute because we don’t know what that regulation should be. Is it like a utility, or a financial institution, or a drug, or like barristers who self-regulate?

Big players in the tech field are starting to realise that there is this huge potential application area for their AI or for their technology. So I worry that, under the guise of learning and education, bad things can happen, because we’ve got a whole set of organisations that are interested in education that were never interested before.

The same month that your Institute launched, YouTube announced a focus on education with its new YouTube Learning platform. Is interest from YouTube and other big companies a good thing?

YouTube Learning is a great example. How do we know that any of the things that YouTube Learning will produce have any educational benefit? We don’t. As I understand it, YouTube Learning is funding star educators to produce material — these are people who have proven that they have educational value. But then, how can we be sure that other people, who are less scrupulous and less well known, won’t also produce content on YouTube? Because in the past we’ve had inappropriate material appear on YouTube.

And then in general, you’ve got the potential for much more sinister activity. How could we be sure that people aren’t being taught things we might not want them to know? How can we be sure that people’s beliefs aren’t being very subtly manipulated in ways that they don’t realise? We know, and there is lots of empirical evidence to demonstrate it, that we’re not very good at telling when our beliefs have been changed. In a way, education is all about changing people’s beliefs: helping them to learn. But there is a negative side to that. How do we know there is not some really harmful content going round, some subliminal advertising going on? If you are a company that is all about returns to your shareholders, what’s to stop you doing that?

You have said that one of the Institute’s tasks is to make sure that AI doesn’t prioritise some aspects of learning at the expense of others. What do you mean?

We don’t want AI to be used as a sticking plaster to cover something up; we need it to be something that is used to improve the whole system. We don’t want to substitute teachers with an AI tutor and some bouncers to keep things in order. Although it is expensive to develop these AI systems in the first place, once you’ve got them and they can learn, they don’t get tired, they don’t need holidays, they don’t go on strike. We’re short of money and teachers — you can see how it could happen.

We also need to watch for AI being used to prioritise a subset of what we want education to be about. It could be that industry says: “we need these skills”. If you just deliver what industry needs, you don’t end up with a holistic curriculum. Or, if we have the ability to provide tailored, effective tutoring in particular subject areas, we might skew too much towards those areas.

EDUCATE is going to embody these ethical principles laid out by the Institute. So far you’ve supported 130 startups. From this work over the last few years, what are three design faults that spell an edtech fail?
  1. If they’re general purpose, they probably aren’t good: for example, if it’s just a game and it doesn’t differentiate for different ability levels and preferences. In order for these things to be effective, you have to provide some kind of differentiation, just as a good teacher does.
  2. If you have something that doesn’t provide good feedback to the learner, beyond just “that is right” and “that is wrong”. If I get something wrong, how can I get it right next time?
  3. When the tech becomes the focus. Slightly less sophisticated tech but much more sophisticated thinking about the education would be much better. The key is to ask: what is the educational challenge we’re trying to address, what is the technology we could use, and could we do better without it?