ChatGPT and the World Cup: The Power of Predictive Science and its Shortcomings

Predictive statistics is a branch of statistics that involves using data and statistical models to make predictions about future events. In the context of the World Cup, predictive statistics could be used to analyze data from past World Cup tournaments and other relevant sources to make predictions about the outcomes of future matches.

There are many different statistical techniques that can be used for predictive modeling, including regression analysis, decision trees, and machine learning algorithms. These techniques can be applied to a variety of data sources, including data on team performance, player statistics, and environmental conditions.

Predictive models can be used to make a variety of predictions about the World Cup, including the likelihood of a particular team winning a match, the probability of a match ending in a draw, or the likelihood of a player scoring a goal. These predictions can be used by fans, media outlets, and sports betting companies to inform their decisions and make informed predictions about the outcome of World Cup matches.

It is important to note that predictive modeling is not an exact science and that the accuracy of predictions will depend on the quality and relevance of the data used and the sophistication of the statistical models employed.

The text above was written by ChatGPT. Impressive, isn’t it? If you have absolutely no idea what ChatGPT is, it’s about to become even more impressive. It’s a chatbot by OpenAI that can write an original text about virtually anything in a matter of seconds. The text literally unfolds before your eyes. I tested it myself, creating four very distinct texts: one on second language acquisition, something I’m quite familiar with; another on something I’ve recently started reading about (consciousness in Buddhism); and two on topics my wife is an expert in: nutrigenetics and precision nutrition. Both my wife and I were impressed. We checked for plagiarism, and the texts are 100% original; they really seem to be something only a human being could have written. How can that possibly be? If you check ChatGPT’s FAQ section, they tell you that:

These models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like. It is important to keep in mind that this is a direct result of the system’s design (i.e. maximizing the similarity between outputs and the dataset the models were trained on) and that such outputs may be inaccurate, untruthful, and otherwise misleading at times.

This tool raises a number of questions, mainly if you teach (particularly English) or if you’re a freelance writer – I do both. Will this technology replace us, or at the very least impact our work in significant ways? Have you noticed how we keep asking ourselves that again and again? I remember people wondering what the future of teaching would be 15 years ago, when YouTube became popular. Then it was Google Translate. Now it’s Artificial Intelligence (AI).

I suppose we can analyze this through the case of the world’s largest current event: the 2022 World Cup in Qatar. Oxford University applied AI to the biggest football championship in the world and, based on FIFA ratings, estimated the route each national team would take to the final match.

[Image: Oxford AI’s predicted bracket, showing each team’s estimated route to the final]
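To make the idea concrete, here is a minimal sketch of how a rating-based forecast of this kind typically works. The ratings and the Elo-style win-probability formula below are illustrative assumptions, not Oxford’s actual model or FIFA’s real numbers: convert each team’s rating into a match win probability, then simulate the knockout bracket thousands of times and count how often each team ends up champion.

```python
import random

def win_probability(rating_a, rating_b):
    """Elo-style probability that team A beats team B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def simulate_knockout(teams, ratings, rng):
    """Play out a single-elimination bracket once; returns the champion."""
    round_teams = list(teams)
    while len(round_teams) > 1:
        next_round = []
        # Pair off adjacent teams in the bracket.
        for a, b in zip(round_teams[::2], round_teams[1::2]):
            p = win_probability(ratings[a], ratings[b])
            next_round.append(a if rng.random() < p else b)
        round_teams = next_round
    return round_teams[0]

# Hypothetical ratings and bracket -- illustrative only.
ratings = {"Brazil": 2150, "Argentina": 2100, "France": 2110, "Morocco": 1850}
bracket = ["Brazil", "Morocco", "France", "Argentina"]

rng = random.Random(42)  # fixed seed so the run is reproducible
wins = {}
for _ in range(10_000):
    champ = simulate_knockout(bracket, ratings, rng)
    wins[champ] = wins.get(champ, 0) + 1

for team, n in sorted(wins.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {n / 10_000:.1%}")
```

The point the rest of this post makes is visible even in this toy: the model can only ever output the most probable bracket given the ratings, so an upset like Morocco’s run is, by construction, a low-frequency branch it will never put in its headline prediction.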

In a recent publication for Medium entitled AI’s World Cup Prediction Tragedy: Beautiful Game is Too Complex for Machines, Erman Akdogan wrote:

One of the main reasons why AI cannot accurately predict the winner of the World Cup is that the game of soccer involves a high level of human decision making and intuition. […] In soccer, a single moment of brilliance or mistake can change the course of a game and ultimately determine the outcome.

Machines, as sophisticated as they are, fail to grasp what I like to call the human factor. AI is based on logic, and its algorithms are created to work in synchrony with big data sets. AI looks for patterns and calculates the probability of each possible outcome, picking the most likely one as the answer. For the World Cup, we may say that the Oxford AI got many things right: the major teams are there, and it got two of the Quarter Finals right (Netherlands VS Argentina, and France VS England). But when things defied the logical or even natural course of events, it failed. It failed to predict that Morocco would go so far. It failed to see South Korea, Japan, Poland, Senegal, Australia, and the USA move to the Round of 16. Oxford AI failed to predict the final match and, to my sadness as a Brazilian, the winner.

What does that have to do with ChatGPT? A lot, actually. The World Cup AI flopped because of the human factor. I’m rereading a book for a session I’m delivering in a few days that discusses precisely that: French neuroscientist Stanislas Dehaene’s How We Learn: Why Brains Learn Better Than Any Machine… for Now. He claims that while AI requires loads of data to find patterns and learn, we can do it with relatively scarce data because of genetic factors and social learning. We’re all born with embedded concepts about the universe, nature, and, more specifically, our own biology. We don’t need to be taught these things. On top of that, we can learn things quite efficiently through shared attention and, thus, social interaction.

Perhaps, one of the biggest advantages of the human factor is creativity. Sir Ken Robinson, an idol of mine, put it simply a while back:

Creativity is the process of having original ideas that have value

ChatGPT creates original and valuable texts, so we might even call it a creative tool. But it’s limited by a rigid structure. When my wife and I read and analyzed the texts for a few minutes, we noticed that they are basically the type of informative text you might find on Wikipedia. There are no big differences in style or linguistic devices. They follow a recipe, a formula. Many colleagues are worried that this tool will allow students to “cheat” on their assignments and have the technology write essays for them. Others think publishers might use it to write the reading section of every unit of their upcoming books. But I have to agree with Bruno Albuquerque. He recently wrote a thought-provoking text on his excellent blog differentiating between work and Work (with a capital W), drawing on Hannah Arendt’s distinction between trabalho (labor) and obra (work). Bruno says that ChatGPT might impact the less creative, repetitive type of work, but it will likely not affect the creative type of Work inherent to our unique human faculties.

I myself recently published an article for Modern English Teacher (MET) discussing the human factor in teaching and learning. The COVID-19 pandemic, despite all the suffering it brought us, gave academics a unique opportunity to check the impact of technology on learning outcomes. Many tech enthusiasts and gurus thought this period would bring about a revolution in teaching and speed up EdTech integration at a never-before-seen scale. They said students would become more autonomous and self-efficacious and that teachers could create asynchronous materials and upscale their reach. They were wrong, particularly about young learners.

Teaching and learning suffered a great deal during the pandemic precisely because it weakened the human factor. The physical presence of a teacher in the classroom makes a huge difference. The brilliance of a teacher making a decision that might defy logic can be the difference between failure and success for that one student who is struggling. Messi’s brilliance against Croatia made a difference Oxford AI did not predict. And, as a Brazilian, I’m sure it would have made a similar difference if the opponent had been Brazil.

But I might be wrong… See, Brazil VS Argentina is a South American derby, and its outcome is hard to predict. The human factor is always interfering: it’s unstable, it defies logic, and it’s complex. Playing against rivals with a crowd of supporters cheering for you can trigger many things: boost morale, increase motivation, and make the impossible possible. So can teachers in the classroom.

Science is an amazing tool that can help us understand different phenomena and predict certain outcomes. It uses statistics to tell us the likelihood of things happening in any area of human endeavor. But it is not absolute or invulnerable to failure. Good science and statistics do precisely what I mentioned before: they tell us the probability of a given outcome, and they follow a certain logic based on the available data. Great science uses phrases such as “the evidence suggests” or “this might mean that” and ends with “more research is necessary to…”

That said, ChatGPT can be a wonderful tool for those who struggle to write descriptive/informative texts. Students might even use it to compare their text to a model that follows a premade structure. They may also use it to cheat, as I mentioned before. But a truly unique and creative text still relies on the presence of a human being on the other end of a screen. I think my message here to those concerned with ChatGPT can be the conclusion I reached in my MET article. Any technology that fails to maintain or at least simulate the human factor effectively will likely never replace us, human beings, teachers in flesh and blood. It will simply become another tool in our repertoire and hopefully help us, real people, do the Work more effectively.

Time to watch the final match of the World Cup. Even though I agree with Stanislas Dehaene and find his work at the Collège de France fascinating, today I’ll be rooting for my South American hermanos, who were able to defeat the Oxford AI machine.

Go, Argentina!
