Hacker News | abss's comments

Very good direction! We have to put science into software ASAP. It is interesting to see the pushback, but there is no way we can proceed with the current approach, which ignores the fact that we have computers to help.


You're ignoring agents and programming with AI. There is huge progress in this area; it is not that boring...


Seek meaning by helping others, or strive to conquer and shape the world through your will.


Betteridge's Law of Headlines states: "Any headline that ends in a question mark can be answered by 'no'."


But perhaps there is room for optimism: AI holds the potential to manage the complexity that has historically condemned societies to collapse.


I believe there are indications that the era of big LLMs will come to an end because they will hit a price and performance wall. There is a serious possibility that they will remain an NLP tool, while real thinking gets formalized as various types of software in different niches. Multi-agent systems will resemble the early web, with thousands or millions of variations and different types of expert agents, not a single "god-like" piece of software capable of doing everything.

It’s natural for major players to try to create a "digital god"; this is their monopolistic path. If this becomes possible, such systems will need to be taken out of private hands and LLMs turned into infrastructure services for all of humanity, or else we risk ending up in a dystopia. However, there is a serious chance they won’t achieve a "digital god" and will instead, unintentionally, create a world with decentralized intelligence.

It’s better to be optimistic—we have nothing to lose, even if it’s not realistic.


Yes, details matter. The whole idea of creating an AGI that is simultaneously a generalist looks more and more like wishful thinking. The reality is that solving real problems requires a large number of correct reasoning steps, along with the ability to choose which type of inference is useful at each step to avoid the explosion of complexity inherent in any brute-force approach. This suggests that we will have AI experts in different domains, perhaps superior to humans, but spread across thousands or even millions of narrow areas of expertise.

To create something akin to an all-knowing superintelligent deity, we would need to combine thousands of experts, which would also consume unsustainable amounts of energy. I wouldn't bet on AGI in the coming years; it's just hype, and it distracts the discussion until big money finds a way to establish monopolies. However, if both UX and reasoning expertise require deep customization and specialization, we have a real chance to use AI to solve deep social problems rather than transforming society into a dystopia where humans are morally and intellectually surpassed, and those remaining are controlled by corporations that could at any moment be taken over by sociopaths.


Interesting, but we have to consider this information with skepticism since it comes from Meta. Additionally, merely open-sourcing models is insufficient; the training data must also be accessible to verify the outcomes. Furthermore, tools and applications must be freely deployable and capable of storing and sharing data under our personal control. Self-promotion: we have started experiments for an AI-based operating system; check out AssistOS.org. We recently received a European research grant to support the improvement of AssistOS components. Contact us if you find our work interesting, wish to contribute, want to conduct research with us, or want to build an application for AssistOS.


It requires new best practices in governance, and any progress here is difficult. Maybe AI could help.


Let's fix the 'vulnerable distribution channels' issue. Otherwise, by regulating open source AI you are just selecting who gets to abuse it. There are issues with channels that are too big, focused on making money, and ignoring social effects; we have too many systems built around extractive, mindless capitalist values. Let's look in the mirror and do the right things. AI and open source AIs are good mirrors that show the ugliness of various societal constructs.


The myth and intuition of a perfect, complete machine is a dominant cultural theme. I recommend Erik J. Larson's book on the myths of artificial intelligence. I see the fundamental problem with peer review as somewhat broader: we understand inference through induction and deduction, but we fail to understand and appreciate abduction, which is actually the basis of science and part of what makes us human. I believe, abstractly, this is one of the problems. I think abduction is linked to meta-rationality, another cultural difficulty.

