My subjective notes on the state of AI at the end of 2024

4 points by tiendil 2 days ago | 4 comments

The AI landscape continues to evolve rapidly. At the end of 2024, I took some time to reflect on the current state of AI and make some predictions about the future. The result is this series of four posts that I want to share with you:

1. Industry Transparency: https://tiendil.org/en/posts/ai-notes-2024-industry-transparency

2. Generative Knowledge Bases: https://tiendil.org/en/posts/ai-notes-2024-generative-knowledge-base

3. Current State: https://tiendil.org/en/posts/ai-notes-2024-the-current-state

4. Forecast: https://tiendil.org/en/posts/ai-notes-2024-prognosis

Since the posts are quite long, here are the key takeaways.

By analyzing the decisions of major AI developers, such as OpenAI or Google, we can make fairly accurate assumptions about the state of the AI industry.

All current progress is based on a single base technology — generative knowledge bases, which are large probabilistic models.

The development of neural networks, a.k.a. generative knowledge bases, is reaching a plateau. Future progress is likely to be incremental/evolutionary rather than explosive/revolutionary.

We shouldn't expect singularity, strong AI, or job loss to robots (in the near future).

Instead, we should expect increased labor productivity, job redistribution, turbulence in education, and shifts in the education level of future generations.

What do you think? How does the concept of "generative knowledge bases" resonate with your understanding of the current situation?

343rwerfd a day ago | next |

You're mentioning only publicly known information. The rumors about radical advances behind closed doors are wild, and then you suddenly get stuff like deepseek or phi-4.

Rumors mention recursive "self" improvement (training) already ongoing at big scale: better AIs training lesser (but still powerful) AIs, which become better AIs, and the cycle restarts. Maybe o1 and o3 are just the beginning of what was chosen to be made publicly available (also the newer Sonnet).

https://www.thealgorithmicbridge.com/p/this-rumor-about-gpt-...

The pace of change is actually uncertain; you could have revolutionary advances maybe 4-7 times this year, because the tide has changed: massive hardware (only available to a few players) isn't a blocker anymore, given that algorithms and software are taking the lead as the main force advancing AI development (anyone on the planet with a brain could make a radical leap in AI tech, anytime going forward).

https://sakana.ai/transformer-squared/

Besides the rumors and the relatively (still) low-impact recent innovations, we have history: remember that the technology behind gpt-2 existed basically two years before they made it public, and the theory behind that technology existed maybe four years before anything close to practical appeared.

All the public information is just old news. If you want to know where everything is going, you should look at where the money is going and/or where the best teams are working (deepseek, others like novasky > sky-t1).

https://novasky-ai.github.io/posts/sky-t1/

tiendil 20 hours ago | root | parent |

Rumors are rumors.

1. Positive rumors are profitable => they are targets for marketing activity, especially when huge money is at stake.

2. Humanity has a long history of false "fast technological success" rumors: thermonuclear fusion, a cryptocurrency that will disrupt the banking system, IoT that will revolutionize everything, the AI boom of the 1980s, etc. They are almost always wrong.

3. Development cycles in IT are fast; in highly competitive markets, they are extremely fast. The current public information in the AI industry describes its actual state fairly closely. The risk of not being first is too high to hide or delay something; such a delay may literally cost billions in investment.

kingkongjaffa a day ago | prev | next |

> OpenAI et al reaching a plateau.

Yes. The latest product releases from them all have been chain-of-thought tweaks to existing models rather than entirely new models. Several models are perceivably the same as or worse than previous models (Sonnet 3.5 is sometimes worse than Opus 3.0, and Opus 3.5 is nowhere in sight).

GPT-4o is sometimes worse than base GPT-4 when it was available.

The newest and largest models so far are too expensive to run, not much better than the previous best models, or both; this is why they have not been released yet despite rumours that these newest models were being trained.

I would love announcements/data to the contrary.

moomoo11 17 hours ago | prev |

Meanwhile, actual software we rely on to do our real jobs continues to suck and reaches new levels of enshittification as new, mostly half-baked gen-AI features are added instead of better UX or customization.