Recently, a piece of news went viral. GeekPark, a Chinese technology media outlet, tested nine large language models (including OpenAI's GPT-4o, ByteDance's Doubao, Baidu's Wenxin 4.0, Baichuan's Baixiaoying, Alibaba's Tongyi Qianwen, Moonshot's Kimi, Tencent's Yuanbao, and MiniMax) on the 2024 Chinese college entrance exam. The result: four of them scored above the first-tier university admission line in liberal arts. The best overall performer was GPT-4o (562 points), and the best domestic model was ByteDance's Doubao (542.5 points). Science was a different story: the highest science score was only 478.5 points, and every model failed the math test, with the top score being just 70.

If you have relatives or friends who have taken the college entrance exam, you know that scoring above the first-tier admission line is no small feat. So when I saw this news, a deep doubt gripped me: if ten years of hard study can be outperformed by AI, what is the point of learning?

AI's abilities have already surpassed the average human level in many aspects

GeekPark did not disclose the prompts they used, but judging from the results, they appear credible, especially in liberal-arts subjects such as Chinese, English, and History. This achievement is not surprising for AI; after all, these models have been trained on nearly all publicly available internet data. The weak performance in mathematics, however, is entirely consistent with the current state of AI capabilities.

But compared with last year's AI capabilities, the progress is enormous. Last year the strongest model was GPT-4, which lacked multimodal capabilities, meaning it could not solve questions that relied on images or audio. Last year the best domestic large models were roughly on par with GPT-3.5; this year, models like Doubao have already approached the capabilities of GPT-4o.

OpenAI's CTO Mira Murati mentioned in a recent interview:

If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence. And then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we're looking at PhD intelligence for specific tasks.

It is foreseeable that in the near future, AI's college entrance exam scores will pass the key-university admission line and eventually take the top spot. Just as in Go, it will soon only be news when a human beats the AI.

We should learn from AI how to learn

Looking back at the iconic "man-versus-machine" showdown in the Go community, Google's AlphaGo emerged victorious against the world's top Go player. Human champions humbly lowered their heads and began learning Go strategies from AI. Nowadays, AI-assisted training is extensively used in the Go community.

The capabilities of AI are advancing at an astonishing pace. I believe that many methods and principles of machine learning are highly valuable for us to learn from. I have summarized them into 5 key points.

1. Objectives come first

In all machine learning, an objective function is essential. The same principle applies to human learning; setting a goal is the first step before starting to learn.

The objective function, also known as the cost function or loss function, is crucial. The aim of model training is to minimize the loss. Similarly, human learning revolves around consistently striving towards one's learning objectives.
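To make this concrete, here is a minimal, illustrative sketch (the data and candidate models are made up): the loss function serves as the yardstick, and whichever candidate scores lower against it is the better model.

```python
# A minimal sketch of "objectives come first": the loss function is the
# yardstick every candidate model is measured against. (Toy example.)

def mse_loss(predictions, targets):
    """Mean squared error: the objective to minimize; lower is better."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# Toy data following the rule y = 2x, and two candidate models y = w * x.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

for w in (1.5, 2.0):
    preds = [w * x for x in xs]
    # w = 2.0 matches the data exactly, so its loss is 0.
    print(f"w = {w}: loss = {mse_loss(preds, ys):.3f}")
```

Without a loss function, "which model is better" has no answer; with one, the comparison is mechanical. The same holds for human learning: a clear goal turns vague effort into measurable progress.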

2. Learning from examples

Anyone familiar with LLMs like ChatGPT knows that when an LLM gives a poor output, we can guide it by providing one or more examples. This technique is known as one-shot or few-shot learning.
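A few-shot prompt is just worked examples placed before the real question, so the model can infer the pattern. Here is a hedged sketch of how such a prompt might be assembled; the `Input:`/`Output:` format and the sentiment examples are illustrative assumptions, not a prescribed API.

```python
# A sketch of few-shot prompting: prepend (input, output) example pairs,
# then the new input, and let the model continue the pattern.

def build_few_shot_prompt(examples, query):
    """Format worked examples followed by the new query."""
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    lines.append(f"Input: {query}\nOutput:")  # the model completes this line
    return "\n\n".join(lines)

examples = [
    ("I loved this movie!", "positive"),
    ("Terrible service, never again.", "negative"),
]
prompt = build_few_shot_prompt(examples, "The food was amazing.")
print(prompt)
```

Notice that no rule ("classify sentiment") is stated anywhere; the examples alone carry it, which is exactly the point of the next paragraph.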

Modern AI abandoned rule-based learning long ago, opting instead to learn from vast datasets and cases to uncover underlying patterns and principles. Rule-based learning is like memorizing formulas by rote, while case-based learning involves deriving rules and methods from numerous examples. Clearly, the latter approach generalizes better and is far harder to forget.

3. Learning with high-quality information

"Garbage in, garbage out" is a fundamental principle in machine learning. If the input data is of poor quality, the model's performance will inevitably be subpar. That's why GPT's training datasets are meticulously curated from high-quality sources, including Wikipedia, books, academic papers, code, and premium internet texts.

There is an English proverb: "If you lie down with dogs, you will get up with fleas." Similarly, human learning should rely on high-quality information. We live in an era of information explosion, overwhelmed with information and resources rather than lacking them, which makes it all the more necessary to extract the valuable and discard the trivial. Only through high-quality articles, books, videos, courses, websites, and tools can we efficiently distill genuine knowledge.

4. Learning from errors

In the training process of a large language model, data flows through the layers of the neural network, eventually producing the next token. The discrepancy between the expected and actual tokens is the model's error. The model learns from this error by backpropagating it through the network and using techniques such as gradient descent to adjust its parameters, thereby reducing the loss. By repeating this process continuously, the model moves ever closer to its objective.
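The error-driven loop above can be sketched numerically. This toy linear model stands in for a network (real backpropagation chains gradients through many layers, which a one-layer model does not show); the data and learning rate are made up for illustration.

```python
# A numeric sketch of "learning from errors": compute the error,
# take its gradient, and nudge the parameters downhill.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 4.0, 7.0, 10.0]   # data generated by the rule y = 3x + 1

w, b = 0.0, 0.0   # parameters start with no knowledge
lr = 0.05         # learning rate for gradient descent

for step in range(500):
    preds = [w * x + b for x in xs]
    errors = [p - y for p, y in zip(preds, ys)]   # the feedback signal
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * e * x for e, x in zip(errors, xs)) / len(xs)
    grad_b = sum(2 * e for e in errors) / len(xs)
    # The update step: move each parameter against its gradient.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward w = 3, b = 1
```

Every single update is driven by the error; without it, the parameters would never move. That is the sense in which mistakes are the raw material of learning.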

Errors provide the most effective feedback. However, humans tend to seek rewards and avoid punishment. The environment in which we grow up leads us to subconsciously view errors as negative. Consequently, we often hide our mistakes to evade potential punishment, unknowingly missing out on valuable learning opportunities.

We need to clearly recognize this cognitive trap. Don't fear making mistakes during learning. Although errors can be painful, they are the essential source of learning.

5. Iterative mindset

Learning is not something that happens overnight; don't expect to reach the top with just a bit of effort. ChatGPT didn't suddenly astonish the world upon its launch; it took five years and some $2 billion, iterating through GPT-1, GPT-2, and GPT-3, before achieving remarkable results with GPT-3.5.

Learning is a process of incremental improvement. We should adopt an iterative mindset towards learning. Every small step forward is worth celebrating. Avoid the trap of seeking instant success and giving up when progress seems slow.

Remember, large models never give up; they are constantly improving. Learning has no end point, and we must commit to lifelong learning.

The significance of learning lies not in exam scores, but in solving open problems and creating new things

AI is indeed powerful, but its capability often resembles that of a parrot mimicking speech. It cannot independently apply these abilities to solve new problems or create new things. In other words, AI knows the "what" but not the "why." This lack of causal understanding is why AI frequently hallucinates.

This is precisely the significance of learning. Learning isn't just about achieving high scores; exam questions have standard answers, and with enough background knowledge, anyone can answer them correctly. When it comes to sheer knowledge reserves, no one can match large language models. But our lives are not a standardized test; they are filled with unknowns and challenges. The purpose of learning is to build our cognitive frameworks and use existing knowledge to solve open problems in life and work, or to create new things. In this process, AI can serve as a powerful assistant, quickly closing many skill gaps and allowing us to focus on inherently human capabilities such as empathy, morality, aesthetics, imagination, and creativity.

In summary, the significance of learning is mastering the art of learning itself and using it to construct our thoughts and enrich our cognition. If one day a war between humans and AI erupts, as depicted in so much science fiction, our only weapons to prevail will be love and thought.