
From the Head's Study

Will Artificial Intelligence replace human intellect?

This is my first opportunity as the new Head of the Pocklington School Foundation to offer you a “thought piece”. At the start of term I addressed the School on an issue that I feel is important for them to think about as young people, and it is something I want to do on a semi-regular basis going forwards: to offer the community thoughts on important issues. I hope you will indulge me.

Today, the question I have been considering is whether Artificial Intelligence will replace human intellect in years to come.

It cannot be denied that Artificial Intelligence has played a major role in the digitalisation of society, enabling us to collect, process and analyse large amounts of data at ever faster rates. It has led to the creation of new technologies, improved business processes, and greater efficiency all round.

However, there is a fundamental problem at the heart of the ethical life lived by humans. We are blessed with the capacity to make conscious, ethical choices in conditions of fundamental uncertainty. It is our lot to be faced with genuine ethical dilemmas in which there is, in principle, no “right” answer. Values such as truth and compassion can be held with equal weight and pull us in opposite directions. We know what it means to make a responsible decision, even in the face of radical uncertainty, and we do it all the time. Can Artificial Intelligence be trained to think with a humanist approach? Can it think critically, deeply?


The issue with large language models is that they have no actual "intelligence"

As you may well be aware, back in November 2022 the generative AI tool ChatGPT was released by OpenAI to much fanfare. It was a new chat-style interface to OpenAI’s flagship Large Language Model series: compelling, easy to use and strikingly lifelike in the quality of interaction possible. There have been regular updates since then, and a wide range of similar models has been released. It is important to understand that, in essence, what a large language model does is take an initial input and then serially predict the next most probable word, based on that input and the program’s own subsequent additions to it. It is a bit like taking the idea of predictive text and ramping it up to a whole other level, based on a gigantic data set.
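To make that “predictive text on another level” picture a little more concrete, here is a minimal toy sketch in Python. It is emphatically not how a real model is built: the tiny hand-written table of next-word probabilities below stands in for the billions of parameters a genuine model learns from its training data. But the serial loop, predicting a word, appending it to the text, then predicting again from the extended text, is the essential mechanism.

```python
import random

# A hypothetical, hand-written table of next-word probabilities.
# A real large language model learns these relationships (over word
# fragments, not whole words) from a gigantic data set.
bigrams = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"on": 1.0},
    "ran": {"home": 1.0},
    "on":  {"the": 1.0},
}

def generate(prompt: str, max_words: int = 8) -> str:
    words = prompt.split()
    for _ in range(max_words):
        options = bigrams.get(words[-1])
        if not options:  # no known continuation: stop
            break
        # Pick the next word at random, weighted by probability...
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        # ...and feed it back in as part of the input for the next step.
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the dog ran home"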

Given how these models are constructed, a number of issues have been identified with their status as “Artificial Intelligence”, which you have no doubt heard discussed. The issues include questions of ethics; a proneness to “hallucinations” (statements that are plausible at a surface level but factually incorrect); the reproduction of biases that existed in their training data (and considering they were trained on the material contained on the internet, caution is certainly advised); their use in deep fakes and predictive policing; and their inability to engage in deep learning or critical thinking in any real way. The issue with large language models is that they have no actual “intelligence” at all: despite the fluency of their outputs, there is no grounded understanding or lived experience behind them, and they are not “thinking” in any humanly recognisable sense. Large language models have no actual conception of what is real and what is not.


[Image: a business person using a calculator]

Now arguably, it is being deeply ironic and attempting a humorous response, but I think it is more likely that it has simply missed the point! It is well established that generative AI is very bad at working with imagery and metaphor… so how good can it really be at writing high-quality English essays, for instance?

In my own subject area (mathematics), AI can be less than helpful when one considers the context of a question beyond the “just do this”.

I typed in “Four cars leave Pocklington together and arrive in York 30 minutes later. Explain briefly how long it would take for eight cars.” The response it gave me was “If four cars take 30 mins to travel from Pocklington to York then it would take eight cars twice as long, or an hour to travel the same distance. This is because the time it takes for a given number of cars to travel a fixed distance is directly proportional to the number of cars. Therefore, if the number of cars is doubled, the time it takes to travel the same distance will also be doubled.”

Whilst I have a little sympathy with that answer from the traffic perspective, it is certainly not something to be relied upon. What the model has done here is to take a frame of reference from how similarly worded problems are solved, but it has failed to understand the context properly, which, with our lived experience, is self-evident. It makes similar errors on similar problems, such as claiming that an orchestra with more players takes longer to play the same piece of music.
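For completeness, the piece of reasoning the model failed to apply is very simple. Assuming each car travels at the same average speed, the journey time is fixed by the distance and the speed alone; the number of cars never enters the formula:

```latex
% Journey time depends only on distance d and average speed v;
% the number of cars does not appear anywhere.
t = \frac{d}{v}
\qquad\Rightarrow\qquad
t_{\text{eight cars}} = t_{\text{four cars}} = 30 \text{ minutes}
```

The model has pattern-matched to problems where time genuinely scales with quantity, without checking whether that model fits the situation.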

Is it inevitable that in a number of years’ time, we will rely heavily on it? Probably, yes. We are already moving that way. But is that what we want? What will the impact be on society?


Are large language models to English what the calculator is to Maths?

You may have heard the comparisons with “Plato’s Argument Against Writing”. The story goes that a great Egyptian king told the inventor of writing that if men learn to write it will “implant forgetfulness in their souls, ceasing to exercise memory because they will rely on that which is written”. But we write every day, so clearly the king was wrong. Later, the printing press was proclaimed to be the end of education: who needs teachers if you can read a printed book and don’t need to write? But as teachers we are still employable, so clearly that was wrong. Then we were told that the internet would change everything: who needs to learn knowledge if you can just google it? But Google is unreliable, so we still need to be educated, and clearly that was wrong too. Is AI just the next frontier, something that will be absorbed as a useful tool but have no real impact?

Some people have commented that large language models are to English what the calculator is to maths. The calculator is a very particular and reliable piece of technology. It does its limited job very well. It is a machine for a particular purpose. However, it doesn’t actually do a lot of what you need to do in maths at all: it can’t prove things, only calculate, and it is of very limited use if you don’t know exactly what you need to type into it. A large language model, by contrast, goes further than just being a tool. It invents new thoughts and concepts (hallucinations being just one example of this), and reinterprets things without putting them through any ethical judgement process at this stage in its development. For example, you can ask AI to do any number of things that you might do in a workplace: write a paper, a grant proposal, and so on. However, only you can determine whether what it suggests is the direction you wanted to take, and only you can weigh up the ethical or moral side of a business plan in your context.


On a more immediate consideration in a school context, if pupils do slide down the slippery slope of using AI to replace their own thinking, they are doing themselves a huge disservice. They are depriving themselves of the very learning they are at school to benefit from. Tasks set in all subjects, both for completion in school and at home, are designed as part of a learning programme for them to work through to gain the skills, knowledge and understanding they need to succeed in that subject. If a teacher sets a task of writing a report about the photosynthesis process in biology, it is not because what they really want is to read 18 reports on the photosynthesis process! No, it’s because they want pupils to engage with HOW to write a report on that topic: how to research it, how to structure it, what the photosynthesis process is and how it can best be explained, and so on. They want them to learn. Not just about photosynthesis, but about how to write a report and explain themselves clearly and concisely. They want them to be able to judge what is good and what is not so good, and not simply to accept what a machine churns out at them as the best possible output. In the future, jobs may well involve producing things with AI, but if our young people don’t learn how to critique, how to recognise an excellent product from an “OK” product, or, more importantly, what they need to do to turn the “OK” product into an excellent product, then they won’t be anywhere near as successful as they could be.

There are some purposes for which AI could be useful, just not for taking lazy short cuts. It could be used to assist with the collation of information; it could be asked to set questions on a topic to assist revision; the responses it produces could be critiqued as a critical thinking activity. But the deep knowledge needs to be there, and of your own making. We should encourage our young people to see it more in the manner of a calculator: something to perform certain tasks to assist you with your thinking, but in no manner to replace your thinking.

Our young people need to take the time to think through how to write that essay or that science report, in order to learn how to do it better each time and improve. To recognise the difference between good and great. To be discerning with the information they come across. And that will make them… and the society they live in… great. What we should aim for is that we all put in the hard yards to ensure that Artificial Intelligence does not replace human intellect, but only enhances it.
