So far, AI progress has mostly been driven by the amount of data and compute used during training
Richard Ngo argues that recent progress in image generation was partly achieved through the development of new architectures and algorithms, e.g. GANs, transformers, and diffusion models. Nevertheless, most progress was driven by scaling relatively simple algorithms with more compute and data, as illustrated by the figure above, which shows better performance with increasing parameter count in Google's Parti model.
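This kind of scaling behaviour is often summarised as a power law in model size. The sketch below is illustrative only: it plots a hypothetical Kaplan-style loss curve L(N) = (N_c / N)^alpha with constants in the range Kaplan et al. (2020) report for language models, not anything fitted to Parti.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical power-law scaling of loss with parameter count N,
# in the style of Kaplan et al. (2020): L(N) = (N_c / N) ** alpha.
# N_c and alpha are illustrative assumptions, not fitted values.
N_c = 8.8e13    # assumed critical parameter count
alpha = 0.076   # assumed scaling exponent

params = np.logspace(8, 12, 100)  # 100M to 1T parameters
loss = (N_c / params) ** alpha

plt.loglog(params, loss)
plt.xlabel("Parameters N")
plt.ylabel("Loss L(N)")
plt.title("Illustrative power-law scaling (hypothetical constants)")
plt.show()
```

On a log-log plot the curve is a straight line: each order of magnitude more parameters buys a fixed fractional reduction in loss, which is why "just scale it up" keeps working.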
I think this is a general pattern: we are mostly relying on more compute and data rather than better algorithms. But this confuses me. Didn't improvements in algorithmic efficiency outpace hardware advances by a wide margin?
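One way these two claims can both hold is if total training compute grew faster than either algorithmic efficiency or hardware alone. The sketch below compares growth over a decade using rough doubling times as assumptions: ~16 months for algorithmic efficiency (OpenAI's "AI and Efficiency" estimate), ~24 months for Moore's law, and ~6 months for total training compute spent on frontier models; these are approximate figures, not measurements from this post.

```python
# Compare exponential growth over a decade given assumed doubling
# times (in months). All three values are rough assumptions for
# illustration, not precise measurements.
DOUBLING_MONTHS = {
    "algorithmic efficiency": 16,
    "Moore's law (hardware)": 24,
    "total training compute": 6,
}

YEARS = 10
for name, months in DOUBLING_MONTHS.items():
    doublings = YEARS * 12 / months    # number of doublings in the period
    factor = 2 ** doublings            # cumulative growth factor
    print(f"{name}: ~{factor:,.0f}x over {YEARS} years")
```

Under these assumptions, algorithmic efficiency (~180x) does outpace hardware (~32x), yet total compute poured into training (~1,000,000x) dwarfs both, which would be consistent with scaling still dominating observed progress.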
Source: Ngo, R. (2023) ‘Visualizing the deep learning revolution’, Medium, 18 January. Available at: https://medium.com/@richardcngo/visualizing-the-deep-learning-revolution-722098eb9c5 (Accessed: 9 February 2023).