Once in a generation, an innovation comes along that transforms a technology from the fluorescent gloom of basement engineering rooms and the bedrooms of lonely teenage hobbyists into something ordinary people can use. The internet came into existence around 1990, but it was the Netscape browser, introduced in 1994, that helped many people discover the web. MP3s existed before the iPod, yet it was the iPod that drove the digital music revolution. There were smartphones before the iPhone was released in 2007, but there was no app store before the iPhone.
Let’s dive into what drove the AI technology renaissance and the rise of large language models (LLMs).
AI in the 1950s
In this stage, people still used primitive models based on hand-written rules.
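What a rule-based model looks like can be sketched in a few lines: every decision is an if/else condition written by a human, and nothing is learned from data. The spam-filter rules below are hypothetical examples, not from any real system.

```python
# Minimal sketch of a rule-based classifier: all the "intelligence"
# is hand-coded by the programmer; no parameters are learned.
def classify_email(text: str) -> str:
    text = text.lower()
    if "winner" in text or "free money" in text:
        return "spam"
    if "meeting" in text or "invoice" in text:
        return "work"
    return "unknown"

print(classify_email("You are a WINNER!"))  # spam
print(classify_email("Project meeting at 10am"))  # work
```

The obvious limitation, which motivated the shift to machine learning, is that every new case requires a human to write another rule.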
AI in the 1980s
Since the 1980s, machine learning has picked up and been used for classification tasks. Training was conducted on small datasets.
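The flavor of this era can be sketched with one of its simplest classifiers, a 1-nearest-neighbour model: "training" amounts to memorising a small set of labeled examples, and prediction picks the closest one. The toy data below is illustrative only.

```python
# Minimal sketch of classification learned from a small labeled dataset:
# a 1-nearest-neighbour classifier over 2-D feature vectors.
def nearest_neighbor(train, query):
    # train: list of ((x, y), label) pairs; query: an (x, y) point.
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    # Return the label of the training example closest to the query.
    return min(train, key=lambda ex: dist2(ex[0], query))[1]

data = [((0.0, 0.0), "cat"), ((1.0, 1.0), "dog")]
print(nearest_neighbor(data, (0.1, 0.2)))  # cat
print(nearest_neighbor(data, (0.9, 0.8)))  # dog
```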
AI in the 1990s – 2000s
Since the 1990s, neural networks, which loosely imitate the human brain, have been trained on labeled data. There are generally 3 types:
CNN (Convolutional Neural Network): often used in visual-related tasks.
RNN (Recurrent Neural Network): useful in natural language tasks.
GAN (Generative Adversarial Network): comprised of two networks (a generator and a discriminator). This is a generative model that can produce novel images that look realistic.
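The operation at the heart of a CNN can be shown without any framework: slide a small kernel over an image and sum the elementwise products. The tiny image and edge-detector kernel below are illustrative only.

```python
# Minimal sketch of the 2-D convolution a CNN layer performs.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1      # output height (no padding, stride 1)
    ow = len(image[0]) - kw + 1   # output width
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # Elementwise multiply the kernel with the image patch and sum.
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out

# A vertical-edge kernel applied to a tiny image whose right half is bright:
# the response peaks exactly where the dark-to-bright edge sits.
img = [[0, 0, 1, 1]] * 3
edge = [[-1, 1], [-1, 1], [-1, 1]]
print(conv2d(img, edge))  # [[0, 3, 0]]
```

In a real CNN the kernel values are learned from data rather than hand-picked, and many such filters are stacked in layers.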
AI in 2017
The 2017 paper “Attention Is All You Need” laid the foundation of generative AI. The transformer model it introduced greatly shortens training time through parallelism.
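The core of the transformer is scaled dot-product attention, softmax(QKᵀ/√d)·V, which scores every position against every other position at once; that all-at-once structure is what makes training parallelizable across the sequence. A minimal plain-Python sketch with toy 2-D vectors (illustrative values, single head, no masking or projections):

```python
import math

# Minimal sketch of scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
def attention(Q, K, V):
    d = len(Q[0])
    # Similarity score of every query against every key, scaled by sqrt(d).
    scores = [
        [sum(q[i] * k[i] for i in range(d)) / math.sqrt(d) for k in K]
        for q in Q
    ]
    out = []
    for row in scores:
        # Numerically stable softmax turns scores into attention weights.
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Each output is a weighted average of the value vectors.
        out.append(
            [sum(w * v[i] for w, v in zip(weights, V)) for i in range(len(V[0]))]
        )
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

Because every row of `scores` is independent, the whole computation is one batch of matrix multiplications in practice, unlike an RNN, which must step through the sequence one token at a time.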
AI in 2018 – Now
In this stage, thanks to the major progress of the transformer model, we have seen various models trained on massive amounts of data, with human demonstrations serving as part of the training material. We’ve seen many AI writers that can produce articles, news, technical docs, and even code. This has great commercial value and has set off a global whirlwind.