Content is provided by Audioboom and the Information Security Forum Podcast. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Audioboom and the Information Security Forum Podcast or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://el.player.fm/legal.
S30 Ep1: Dr. Andrew Newell - Deep Fakes: An attack on human identity

Duration: 23:35
Today, Steve sits down with Dr. Andrew Newell, Chief Scientific Officer at the British biometrics firm iProov, for a conversation about deep fakes. As technology improves, it’s becoming ever more difficult to determine what’s real and what’s fake. Steve and Andrew discuss what this will mean going forward for security, social media platforms, and everyday technology users.
Key Takeaways:
1. Technology is the key to mitigating the threat of deep fakes, which are synthetic images or videos created to deceive.
2. Deep fakes are becoming increasingly sophisticated, making them hard to spot.
3. Newell breaks down the problem into two parts: secure identity verification and detecting synthetic images.
4. Incentives for verifying imagery will radically shift as deep fakes become more prevalent.

Tune in to hear more about:
1. Deep fake technology and its potential impact on identity verification processes (5:57)
2. Preventing deep fake images and videos using technology and algorithmic systems (9:57)
3. Deep fakes and their potential uses, including filmmaking and education (13:11)
4. Deep fakes and their impact on society, with a focus on technology’s role in verifying authenticity (18:43)

Standout Quotes:
1. “I think the urgency here — and this is the absolutely key part — is that we need to get the technology in place to make sure that the processes that rely on the genuineness of the person in imagery, that we can have something in place that we know works, that we know that we can trust, and is something that is very easy to use.” - Andrew Newell
2. “I think on the protection of identity proofing systems against the threat from deep fakes, we have a technology solution now. And the urgency is to make sure that this technology is used wherever that we need to actually guard against that threat.” - Andrew Newell
3. “And one of the most important things, if not the most important thing, is: when we think about a way to mitigate these threats, it has to be something that works for everybody. We cannot end up with a system that only works for certain groups in a society.” - Andrew Newell
Mentioned in this episode:
Read the transcript of this episode
Subscribe to the ISF Podcast wherever you listen to podcasts
Connect with us on LinkedIn and Twitter
From the Information Security Forum, the leading authority on cyber, information security, and risk management.