
Thomas Dietterich: From the Foundations

2:01:57
 
Content provided by The Gradient. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Gradient or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://el.player.fm/legal.

In episode 100 of The Gradient Podcast, Daniel Bashir speaks to Professor Thomas Dietterich.

Professor Dietterich is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. He is a pioneer in the field of machine learning, and has authored more than 225 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability. He is a former President of the Association for the Advancement of Artificial Intelligence, and the founding President of the International Machine Learning Society. Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal for Machine Learning Research, and program chair of AAAI 1990 and NIPS 2000. He currently serves as one of the moderators for the cs.LG category on arXiv.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Episode 100 Note

* (02:03) Intro

* (04:23) Prof. Dietterich’s background

* (14:20) Kuhn and theory development in AI, how Prof. Dietterich thinks about the philosophy of science and AI

* (20:10) Scales of understanding and sentience, grounding, observable evidence

* (23:58) Limits of statistical learning without causal reasoning, systematic understanding

* (25:48) A challenge for the ML community: testing for systematicity

* (26:13) Forming causal understandings of the world

* (28:18) Learning at the Knowledge Level

* (29:18) Background and definitions

* (32:18) Knowledge and goals, a note on LLMs

* (33:03) What it means to learn

* (41:05) LLMs as learning results of inference without learning first principles

* (43:25) System I/II thinking in humans and LLMs

* (47:23) “Routine Science”

* (47:38) Solving multiclass learning problems via error-correcting output codes

* (52:53) Error-correcting codes and redundancy

* (54:48) Why error-correcting codes work, contra intuition

* (59:18) Bias in ML

* (1:06:23) MAXQ for hierarchical RL

* (1:15:48) Computational sustainability

* (1:19:53) Project TAHMO’s moonshot

* (1:23:28) Anomaly detection for weather stations

* (1:25:33) Robustness

* (1:27:23) Motivating The Familiarity Hypothesis

* (1:27:23) Anomaly detection and self-models of competence

* (1:29:25) Measuring the health of freshwater streams

* (1:31:55) An open set problem in species detection

* (1:33:40) Issues in anomaly detection for deep learning

* (1:37:45) The Familiarity Hypothesis

* (1:40:15) Mathematical intuitions and the Familiarity Hypothesis

* (1:44:12) What’s Wrong with LLMs and What We Should Be Building Instead

* (1:46:20) Flaws in LLMs

* (1:47:25) The systems Prof. Dietterich wants to develop

* (1:49:25) Hallucination/confabulation and LLMs vs knowledge bases

* (1:54:00) World knowledge and linguistic knowledge

* (1:55:07) End-to-end learning and knowledge bases

* (1:57:42) Components of an intelligent system and separability

* (1:59:06) Thinking through external memory

* (2:01:10) Outro

Links:

* Research — Fundamentals (Philosophy of AI)

* Learning at the Knowledge Level

* What Does it Mean for a Machine to Understand?

* Research — “Routine Science”

* Ensemble methods in ML and error-correcting output codes

* Solving multiclass learning problems via error-correcting output codes

* An experimental comparison of bagging, boosting, and randomization

* Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms

* The definitive treatment of these questions, by Gareth James

* Discovering/Exploiting structure in MDPs:

* MAXQ for hierarchical RL

* Exogenous State MDPs (paper with George Trimponias, slides)

* Research — Ecosystem Informatics and Computational Sustainability

* Project TAHMO

* Challenges for ML in Computational Sustainability

* Research — Robustness

* Steps towards robust AI (AAAI President’s Address)

* Benchmarking Neural Network Robustness to Common Corruptions and Perturbations (with Dan Hendrycks)

* The familiarity hypothesis: Explaining the behavior of deep open set methods

* Recent commentary

* Toward High-Reliability AI

* What's Wrong with Large Language Models and What We Should Be Building Instead


Get full access to The Gradient at thegradientpub.substack.com/subscribe