The brief history of artificial intelligence: The world has changed fast – what might be next?

The AI systems that we just considered are the result of decades of steady advances in AI technology. 

The big chart below brings this history over the last eight decades into perspective. It is based on the dataset produced by Jaime Sevilla and colleagues.7

Each small circle in this chart represents one AI system. The circle’s position on the horizontal axis indicates when the AI system was built, and its position on the vertical axis shows the amount of computation that was used to train the particular AI system.

Training computation is measured in floating point operations, or FLOP for short. One FLOP is equivalent to one addition, subtraction, multiplication, or division of two decimal numbers. 
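
To make the unit concrete, here is a minimal sketch (the dot-product example is only an illustration, not part of the chart's methodology): a dot product of two vectors of length n costs n multiplications and n - 1 additions, so roughly 2n FLOP.

```python
# Rough FLOP count for a dot product of two length-n vectors:
# n multiplications plus (n - 1) additions, i.e. roughly 2n FLOP.
def dot_product_flop(n: int) -> int:
    return n + (n - 1)

print(dot_product_flop(1_000_000))  # ~2 million FLOP for million-element vectors
```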

All AI systems that rely on machine learning need to be trained, and for these systems training computation is one of the three fundamental factors driving their capabilities. The other two factors are the algorithms and the input data used for training. The visualization shows that as training computation has increased, AI systems have become more and more powerful.

The timeline goes back to the 1940s, the very beginning of electronic computers. The first AI system shown is ‘Theseus’, Claude Shannon’s robotic mouse from 1950 that I mentioned at the beginning. Towards the other end of the timeline you find AI systems like DALL-E and PaLM, whose abilities to produce photorealistic images and to interpret and generate language we have just seen. They are among the AI systems that have used the largest amount of training computation to date.

Training computation is plotted on a logarithmic scale, so each grid line marks a 100-fold increase over the one below it. This long-run perspective shows a continuous increase. For the first six decades, training computation grew in line with Moore’s Law, doubling roughly every 20 months. Since about 2010 this exponential growth has sped up further, to a doubling time of just about six months. That is an astonishingly fast rate of growth.8
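
As a back-of-the-envelope check on what these doubling times imply (a minimal sketch; the 20-month and 6-month figures come from the text above, the rest is arithmetic):

```python
# Growth in training compute per decade implied by the two doubling times.
MONTHS_PER_DECADE = 120

for label, doubling_months in [("pre-2010, Moore's-Law pace (~20 months)", 20),
                               ("post-2010 pace (~6 months)", 6)]:
    doublings = MONTHS_PER_DECADE / doubling_months
    growth = 2 ** doublings
    print(f"{label}: {doublings:.0f} doublings/decade -> ~{growth:,.0f}x")

# pre-2010, Moore's-Law pace (~20 months): 6 doublings/decade -> ~64x
# post-2010 pace (~6 months): 20 doublings/decade -> ~1,048,576x
```

Roughly a million-fold increase per decade, rather than roughly 64-fold, is what separates the recent era from the Moore's-Law era.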

These fast doubling times have compounded into very large increases. PaLM’s training computation was 2.5 billion petaFLOP, more than 5 million times larger than that of AlexNet, the AI with the largest training computation just 10 years earlier.9 
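
Putting those figures into a single unit makes the comparison easier to check (a small sketch; the 2.5 billion petaFLOP figure and the "more than 5 million times" factor are from the text, the AlexNet figure is simply what the division implies):

```python
# Convert PaLM's training compute to FLOP and back out the implied
# order of magnitude for AlexNet from the stated ratio.
PETA = 1e15

palm_flop = 2.5e9 * PETA         # 2.5 billion petaFLOP = 2.5e24 FLOP
ratio = 5e6                      # "more than 5 million times larger"
implied_alexnet_flop = palm_flop / ratio

print(f"PaLM:    ~{palm_flop:.1e} FLOP")
print(f"AlexNet: ~{implied_alexnet_flop:.1e} FLOP or less")
# PaLM:    ~2.5e+24 FLOP
# AlexNet: ~5.0e+17 FLOP or less
```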

Scale-up was already exponential and has sped up substantially over the past decade. What can we learn from this historical development for the future of AI?

Source: https://ourworldindata.org/brief-history-of-ai
