Subbarao Kambhampati: Planning, Reasoning, and Interpretability in the Age of LLMs

1:59:03
 
Content provided by The Gradient. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Gradient or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://el.player.fm/legal.

In episode 110 of The Gradient Podcast, Daniel Bashir speaks to Professor Subbarao Kambhampati.

Professor Kambhampati is a professor of computer science at Arizona State University. He studies fundamental problems in planning and decision making, motivated by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He has served as president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, and a founding board member of the Partnership on AI.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (02:11) Professor Kambhampati’s background

* (06:07) Explanation in AI

* (18:08) What people want from explanations—vocabulary and symbolic explanations

* (21:23) The realization of new concepts in explanation—analogy and grounding

* (30:36) Thinking and language

* (31:48) Conscious and subconscious mental activity

* (36:58) Tacit and explicit knowledge

* (42:09) The development of planning as a research area

* (46:12) RL and planning

* (47:47) What makes a planning problem hard?

* (51:23) Scalability in planning

* (54:48) LLMs do not perform reasoning

* (56:51) How to show LLMs aren’t reasoning

* (59:38) External verifiers and backprompting LLMs

* (1:07:51) LLMs as cognitive orthotics, language and representations

* (1:16:45) Finding out what kinds of representations an AI system uses

* (1:31:08) “Compiling” System 2 knowledge into System 1 knowledge in LLMs

* (1:39:53) The Generative AI Paradox, reasoning and retrieval

* (1:43:48) AI as an ersatz natural science

* (1:44:03) Why AI is straying away from its engineering roots, and what constitutes engineering

* (1:58:33) Outro

Links:

* Professor Kambhampati’s Twitter and homepage

* Research and Writing — Planning and Human-Aware AI Systems

* A Validation-structure-based theory of plan modification and reuse (1990)

* Challenges of Human-Aware AI Systems (2020)

* Polanyi vs. Planning (2021)

* LLMs and Planning

* Can LLMs Really Reason and Plan? (2023)

* On the Planning Abilities of LLMs (2023)

* Other

* Changing the nature of AI research


Get full access to The Gradient at thegradientpub.substack.com/subscribe