
Content provided by Demetrios Brinkmann. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Demetrios Brinkmann or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://el.player.fm/legal.

Detecting Harmful Content at Scale // Matar Haller // #246

51:27
 

Manage episode 428042439 series 3241972

Matar Haller is the VP of Data & AI at ActiveFence, where her teams own the end-to-end automated detection of harmful content at scale, regardless of the abuse area or media type. The work they do here is engaging, impactful, and tough, and Matar is grateful for the people she gets to do it with.

AI For Good - Detecting Harmful Content at Scale // MLOps Podcast #246 with Matar Haller, VP of Data & AI at ActiveFence.

// Abstract
One of the biggest challenges facing online platforms today is detecting harmful content and malicious behavior. Platform abuse poses brand and legal risks, harms the user experience, and often represents a blurred line between online and offline harm. So how can online platforms tackle abuse in a world where bad actors are continuously changing their tactics and developing new ways to avoid detection?

// Bio
Matar Haller leads the Data & AI Group at ActiveFence, where her teams are responsible for the data, algorithms, and infrastructure that fuel ActiveFence's ability to ingest, detect, and analyze harmful activity and malicious content at scale in an ever-changing, complex online landscape. Matar holds a Ph.D. in Neuroscience from the University of California at Berkeley, where she recorded and analyzed signals from electrodes surgically implanted in human brains. Matar is passionate about expanding leadership opportunities for women in STEM fields and has three children who surprise and inspire her every day.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
activefence.com
https://www.youtube.com/@ActiveFence

--------------- ✌️ Connect With Us ✌️ ---------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Matar on LinkedIn: https://www.linkedin.com/company/11682234/admin/feed/posts/

Timestamps:
[00:00] Matar's preferred coffee
[00:13] Takeaways
[01:39] The talk that stood out
[06:15] Online hate speech challenges
[08:13] Evaluate harmful media API
[09:58] Content moderation: AI models
[11:36] Optimizing speed and accuracy
[13:36] Cultural reference AI training
[15:55] Functional tests
[20:05] Continuous adaptation of AI
[26:43] AI detection concerns
[29:12] Fine-tuned vs off-the-shelf
[32:04] Monitoring transformer model hallucinations
[34:08] Auditing process ensures accuracy
[38:38] Testing strategies for ML
[40:05] Modeling hate speech deployment
[42:19] Improving production code quality
[43:52] Finding balance in moderation
[47:23] Model's expertise: cultural sensitivity
[50:26] Wrap up


389 episodes

