Large Language Models and Emergence: A Complex Systems Perspective (Prof. David C. Krakauer)

49:48

Prof. David Krakauer, President of the Santa Fe Institute, argues that we are fundamentally confusing knowledge with intelligence, especially when it comes to AI.

He defines true intelligence as the ability to do more with less—to solve novel problems with limited information. This is contrasted with current AI models, which he describes as doing less with more; they require astounding amounts of data to perform tasks that don't necessarily demonstrate true understanding or adaptation. He humorously calls this "really shit programming".

David challenges the popular notion of "emergence" in Large Language Models (LLMs). He explains that the tech community's definition—seeing a sudden jump in a model's ability to perform a task like three-digit math—is superficial. True emergence, from a complex systems perspective, involves a fundamental change in the system's internal organization, allowing for a new, simpler, and more powerful level of description. He gives the example of moving from tracking individual water molecules to using the elegant laws of fluid dynamics. For LLMs to be truly emergent, we'd need to see them develop new, efficient internal representations, not just get better at memorizing patterns as they scale.
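To make the distinction concrete, here is a minimal sketch of the general point (an illustration, not anything from the episode or the paper; the four-token answer length and the independence assumption are hypothetical): a strict exact-match metric over a multi-token answer can show a sudden jump in task accuracy even while the underlying per-token accuracy improves smoothly, which is one reason a benchmark discontinuity alone is weak evidence of emergence.

```python
# Illustrative sketch (assumptions, not from the episode): suppose the
# answer to a three-digit addition spans roughly four tokens, and each
# token is independently correct with probability p. Exact-match accuracy
# is then about p**4, so a smooth rise in p looks like a sharp jump on
# the task metric.

for step in range(11):
    p = step / 10                  # smoothly improving per-token accuracy
    exact_match = p ** 4           # all four answer tokens must be correct
    print(f"per-token={p:.1f}  exact-match={exact_match:.4f}")
```

On this toy curve, exact-match stays near zero for most of the range and then climbs steeply, even though nothing discontinuous happened underneath; by Krakauer's criterion, such a jump says nothing about whether the model's internal organization has changed.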

Drawing on his background in evolutionary theory, David explains that systems like brains, and later, culture, evolved to process information that changes too quickly for genetic evolution to keep up. He calls culture "evolution at light speed" because it allows us to store our accumulated knowledge externally (in books, tools, etc.) and build upon it without corrupting the original.

This leads to his concept of "exbodiment," where we outsource our cognitive load to the world through things like maps, abacuses, or even language itself.

We create these external tools, internalize the skills they teach us, improve them, and create a feedback loop that enhances our collective intelligence.

However, he ends with a warning. While technology has historically complemented our deficient abilities, modern AI presents a new danger. Because we have an evolutionary drive to conserve energy, we will inevitably outsource our thinking to AI if we can. He fears this is already leading to a "diminution and dilution" of human thought and creativity. Just as our muscles atrophy without use, he argues our brains will too, and we risk becoming mentally dependent on these systems.

TOC:

[00:00:00] Intelligence: Doing more with less

[00:02:10] Why brains evolved: The limits of evolution

[00:05:18] Culture as evolution at light speed

[00:08:11] True meaning of emergence: "More is Different"

[00:10:41] Why LLM capabilities are not true emergence

[00:15:10] What real emergence would look like in AI

[00:19:24] Symmetry breaking: Physics vs. Life

[00:23:30] Two types of emergence: Knowledge In vs. Out

[00:26:46] Causality, agency, and coarse-graining

[00:32:24] "Exbodiment": Outsourcing thought to objects

[00:35:05] Collective intelligence & the boundary of the mind

[00:39:45] Mortal vs. Immortal forms of computation

[00:42:13] The risk of AI: Atrophy of human thought

David Krakauer

President and William H. Miller Professor of Complex Systems

https://www.santafe.edu/people/profile/david-krakauer

REFS:

Large Language Models and Emergence: A Complex Systems Perspective

David C. Krakauer, John W. Krakauer, Melanie Mitchell

https://arxiv.org/abs/2506.11135

Filmed at the Diverse Intelligences Summer Institute:

https://disi.org/
