Explainability, Human Aware AI & sentience in large language models | Dr. Subbarao Kambhampati
Are large language models really sentient or conscious? What is explainability (XAI), and how can we create human-aware AI systems for collaborative tasks? Dr. Subbarao Kambhampati sheds some light on these topics, on generating explanations for human-in-the-loop AI systems, and on what 'intelligence' means in the context of AI systems. He is a Professor of Computer Science at Arizona State University and director of the Yochan lab at ASU, where his research focuses on decision-making and planning, specifically in the context of human-aware AI systems. He has received multiple awards for his research contributions, has been named a fellow of AAAI, AAAS, and ACM, and is a distinguished alumnus of the University of Maryland and, more recently, of IIT Madras.
Timestamps of the conversation:
00:00:40 Introduction
00:01:32 What got you interested in AI?
00:07:40 A definition of intelligence that is not tied to human intelligence
00:13:40 Sentience vs intelligence in modern AI systems
00:24:06 Human-aware AI systems for better collaboration
00:31:25 Modern AI becoming a natural science rather than an engineering discipline
00:37:35 Understanding symbolic concepts to generate accurate explanations
00:56:45 The need for explainability, and where it matters
01:13:00 What motivates your research: the associated applications or the theoretical pursuit?
01:18:47 Research in academia vs industry
01:24:38 DALL-E performance and critiques
01:45:40 What makes for a good research thesis?
01:59:06 Different trajectories of a good CS PhD student
02:03:42 Focusing on measures vs metrics
02:15:23 Advice to students on getting started with AI
Articles referred to in the conversation:
AI as an Ersatz Natural Science?: https://cacm.acm.org/blogs/blog-cacm/261732-ai-as-an-ersatz-natural-science/fulltext
Polanyi's Revenge and AI's New Romance with Tacit Knowledge: https://cacm.acm.org/magazines/2021/2/250077-polanyis-revenge-and-ais-new-romance-with-tacit-knowledge/fulltext
More about Prof. Rao
Homepage: https://rakaposhi.eas.asu.edu/
Twitter: https://twitter.com/rao2z
About the Host:
Jay is a PhD student at Arizona State University.
LinkedIn: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage (for any queries): https://www.public.asu.edu/~jgshah1/
Stay tuned for upcoming webinars!
***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement of such video content by any institution or its affiliates.***
Check out these podcasts on YouTube: https://www.youtube.com/c/JayShahml
About the author: https://www.public.asu.edu/~jgshah1/