Jacob Haimes (public)
 
muckrAIkers

Jacob Haimes and Igor Krawczuk

Daily+
 
Join us as we dig a tiny bit deeper into the hype surrounding "AI" press releases, research papers, and more. Each episode, we'll highlight ongoing research and investigations, bringing some much-needed contextualization, constructive critique, and even a smidge of occasional good-willed teasing to the conversation as we try to find the meaning under all of this muck.
 
Into AI Safety

Jacob Haimes

Monthly
 
The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved with the conversations surrounding the rules and regulations which should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence" or "AI". For better-formatted show notes, additional resources, and more, go to https://into-ai-safety.github.io For even more content and community engagement, head over to my Pat ...
 
 
Why is Mark Ruffalo talking about SB1047, and what is it anyway? Tune in for our thoughts on the now vetoed California legislation that had Big Tech scared. (00:00) - Intro (00:31) - Updates from a relatively slow week (03:32) - Disclaimer: SB1047 vetoed during recording (still worth a listen) (05:24) - What is SB1047 (12:30) - Definitions (17:18) …
 
OpenAI's new model is out, and we are going to have to rake through a lot of muck to get the value out of this one! ⚠ Opt out of LinkedIn's GenAI scraping ➡️ https://lnkd.in/epziUeTi (00:00) - Intro (00:25) - Other recent news (02:57) - Hot off the press (03:58) - Why might someone care? (04:52) - What is it? (06:49) - How is it being sold? (10:45)…
 
The almost Dr. Igor Krawczuk joins me for what is the equivalent of 4 of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... Need I say more? If you're interested in connecting with Igor, head on over to his website, or check out placeholder for thesis (it isn't published yet). Because the full show note…
 
As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. As you may have ascertained from the previous two segments of the interview, Dr. Park cofounded StakeOut.AI along with Harry Luk and on…
 
Join me for round 2 with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Along with Harry Luk and one other individual, whose name has been removed due to the requirements of her current position, Dr. Park cofounded StakeOut.AI, a non-profit focused on making AI go well for humans. In addition …
 
UPDATE: Contrary to what I say in this episode, I won't be removing any episodes that are already published from the podcast RSS feed. After getting some advice and reflecting more on my own personal goals, I have decided to shift the direction of the podcast towards accessible content regarding "AI" instead of the show's original focus. I will sti…
 
Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Together with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans. 00:54 - Intro 03:15 - Dr. Park, x-risk, and AGI 08:55 - StakeOut.AI 12:05 - Governance scorecard 19:34 - Hollywood w…
 
Take a trip with me through the paper Large Language Models: A Survey, published on February 9th, 2024. All figures and tables mentioned throughout the episode can be found on the Into AI Safety podcast website. 00:36 - Intro and authors 01:50 - My takes and paper structure 04:40 - Getting to LLMs 07:27 - Defining LLMs & emergence 12:12 - Overvie…
 
Esben reviews an application that I would soon submit for Open Philanthropy's Career Transition Funding opportunity. Although I didn't end up receiving the funding, I do think that this episode can be a valuable resource for both others and myself when applying for funding in the future. Head over to Apart Research's website to check out their wo…
 
Before I begin with the paper-distillation-based minisodes, I figured we would go over best practices for reading research papers. I go through the anatomy of typical papers and give some generally applicable advice. 00:56 - Anatomy of a paper 02:38 - Most common advice 05:24 - Reading sparsity and path 07:30 - Notes and motivation Links to all article…
 
Join our hackathon group for the second episode in the Evals November 2023 Hackathon subseries. In this episode, we solidify our goals for the hackathon after some preliminary experimentation and ideation. Check out Stellaric's website, or follow them on Twitter. 01:53 - Meeting starts 05:05 - Pitch: extension of locked models 23:23 - Pitch: retroa…
 
I provide my thoughts and recommendations regarding personal professional portfolios. 00:35 - Intro to portfolios 01:42 - Modern portfolios 02:27 - What to include 04:38 - Importance of visual 05:50 - The "About" page 06:25 - Tools 08:12 - Future of "Minisodes" Links to all articles/papers which are mentioned throughout the episode can be found bel…
 
Darryl and I discuss his background, how he became interested in machine learning, and a project we are currently working on, investigating the penalization of polysemanticity during the training of neural networks. Check out a diagram of the decoder task used for our research! 01:46 - Interview begins 02:14 - Supernovae classification 08:58 - Penal…
 
A summary of and reflections on the path I have taken to get this podcast started, including some resource recommendations for others who want to do something similar. Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance. LessWrong Spotify for Podcasters Into AI Safety podcast websit…
 
This episode kicks off our first subseries, which will consist of recordings taken during my team's meetings for the AlignmentJams Evals Hackathon in November of 2023. Our team won first place, so you'll be listening to the process which, at the end of the day, turned out to be pretty good. Check out Apart Research, the group that runs the Alignmen…
 
In this minisode I give some tips for staying up-to-date in the ever-changing landscape of AI. I would like to point out that I am constantly iterating on these strategies, tools, and sources, so it is likely that I will make an update episode in the future. Links to all articles/papers which are mentioned throughout the episode can be found below, …
 
Alice Rigg, a mechanistic interpretability researcher from Ottawa, Canada, joins me to discuss their path and the applications process for research/mentorship programs. Join the Mech Interp Discord server and attend reading groups at 11:00am on Wednesdays (Mountain Time)! Check out Alice's website. Links to all articles/papers which are mentioned t…
 
We're back after a month-long hiatus with a podcast refactor and advice on the applications process for research/mentorship programs. Check out the About page on the Into AI Safety website for a summary of the logistics updates. Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.…
 
This episode is a brief overview of the major takeaways I had from attending EAG Boston 2023, and an update on my plans for the podcast moving forward. TL;DL: Starting in early December (2023), I will be uploading episodes on a biweekly basis (day TBD). I won't be releasing another episode until then, so that I can build up a cache of episodes. Duri…
 
In this episode I discuss my initial research proposal for the 2024 Winter AI Safety Camp with one of the individuals who helps facilitate the program, Remmelt Ellen. The proposal is titled The Effect of Machine Learning on Bioengineered Pandemic Risk. A doc-capsule of the proposal at the time of this recording can be found at this link. Links to a…
 
Welcome to the Into AI Safety podcast! In this episode I provide reasoning for why I am starting this podcast, what I am trying to accomplish with it, and a little bit of background on how I got here. Please email all inquiries and suggestions to intoaisafety@gmail.com. By Jacob Haimes
 