Artwork

Το περιεχόμενο παρέχεται από το The Nonlinear Fund. Όλο το περιεχόμενο podcast, συμπεριλαμβανομένων των επεισοδίων, των γραφικών και των περιγραφών podcast, μεταφορτώνεται και παρέχεται απευθείας από τον The Nonlinear Fund ή τον συνεργάτη της πλατφόρμας podcast. Εάν πιστεύετε ότι κάποιος χρησιμοποιεί το έργο σας που προστατεύεται από πνευματικά δικαιώματα χωρίς την άδειά σας, μπορείτε να ακολουθήσετε τη διαδικασία που περιγράφεται εδώ https://el.player.fm/legal.
Player FM - Εφαρμογή podcast
Πηγαίνετε εκτός σύνδεσης με την εφαρμογή Player FM !

LW - [Paper] AI Sandbagging: Language Models can Strategically Underperform on Evaluations by Teun van der Weij

3:18
 
Μοίρασέ το
 

Fetch error

Hmmm there seems to be a problem fetching this series right now. Last successful fetch was on September 22, 2024 16:12 (5d ago)

What now? This series will be checked again in the next day. If you believe it should be working, please verify the publisher's feed link below is valid and includes actual episode links. You can contact support to request the feed be immediately fetched.

Manage episode 423504287 series 3337129
Το περιεχόμενο παρέχεται από το The Nonlinear Fund. Όλο το περιεχόμενο podcast, συμπεριλαμβανομένων των επεισοδίων, των γραφικών και των περιγραφών podcast, μεταφορτώνεται και παρέχεται απευθείας από τον The Nonlinear Fund ή τον συνεργάτη της πλατφόρμας podcast. Εάν πιστεύετε ότι κάποιος χρησιμοποιεί το έργο σας που προστατεύεται από πνευματικά δικαιώματα χωρίς την άδειά σας, μπορείτε να ακολουθήσετε τη διαδικασία που περιγράφεται εδώ https://el.player.fm/legal.
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Paper] AI Sandbagging: Language Models can Strategically Underperform on Evaluations, published by Teun van der Weij on June 13, 2024 on LessWrong.
We have written a paper on sandbagging for which we present the abstract and brief results in this post. See the paper for more details. Tweet thread here.
Abstract
Trustworthy capability evaluations are crucial for ensuring the safety of AI systems, and are becoming a key component of AI regulation. However, the developers of an AI system, or the AI system itself, may have incentives for evaluations to understate the AI's actual capability. These conflicting interests lead to the problem of sandbagging - which we define as strategic underperformance on an evaluation. In this paper we assess sandbagging capabilities in contemporary language models (LMs).
We prompt frontier LMs, like GPT-4 and Claude 3 Opus, to selectively underperform on dangerous capability evaluations, while maintaining performance on general (harmless) capability evaluations. Moreover, we find that models can be fine-tuned, on a synthetic dataset, to hide specific capabilities unless given a password. This behaviour generalizes to high-quality, held-out benchmarks such as WMDP.
In addition, we show that both frontier and smaller models can be prompted, or password-locked, to target specific scores on a capability evaluation. Even more, we found that a capable password-locked model (Llama 3 70b) is reasonably able to emulate a less capable model (Llama 2 7b). Overall, our results suggest that capability evaluations are vulnerable to sandbagging.
This vulnerability decreases the trustworthiness of evaluations, and thereby undermines important safety decisions regarding the development and deployment of advanced AI systems.
We publish our code at https://github.com/TeunvdWeij/sandbagging
Results
Evaluators focus on dangerous capability evaluations, where high performance may trigger regulatory interventions. We demonstrate that LMs can be prompted or fine-tuned to selectively underperform on dangerous capability evaluations while maintaining overall performance.
Even smaller models can do this using "password-locking", which is fine-tuning models to hide their actual capabilities unless given a password. We show that when Mistral-7b or Llama3-8b are password locked on synthetic data, they still hide capabilities on a high-quality held-out benchmark (WMDP).
If a seemingly competent AI system scores surprisingly low on a capability benchmark, detecting sandbagging is easy. However, we find that models can accurately aim for specific scores on benchmarks.
Similarly, inconsistent or random performance can be detected. More credibly, strong models could hide capabilities by mimicking weaker models. We demonstrate this capability by successfully fine-tuning Llama 3 70b to emulate Llama 2 7b only when the password is not present.
Our work suggests that capability evaluations are vulnerable to sandbagging, which is bad news, but good to know. In our following project, we will work on how to mitigate this problem. Reach out if you are interested in working on this.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
  continue reading

1851 επεισόδια

Artwork
iconΜοίρασέ το
 

Fetch error

Hmmm there seems to be a problem fetching this series right now. Last successful fetch was on September 22, 2024 16:12 (5d ago)

What now? This series will be checked again in the next day. If you believe it should be working, please verify the publisher's feed link below is valid and includes actual episode links. You can contact support to request the feed be immediately fetched.

Manage episode 423504287 series 3337129
Το περιεχόμενο παρέχεται από το The Nonlinear Fund. Όλο το περιεχόμενο podcast, συμπεριλαμβανομένων των επεισοδίων, των γραφικών και των περιγραφών podcast, μεταφορτώνεται και παρέχεται απευθείας από τον The Nonlinear Fund ή τον συνεργάτη της πλατφόρμας podcast. Εάν πιστεύετε ότι κάποιος χρησιμοποιεί το έργο σας που προστατεύεται από πνευματικά δικαιώματα χωρίς την άδειά σας, μπορείτε να ακολουθήσετε τη διαδικασία που περιγράφεται εδώ https://el.player.fm/legal.
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Paper] AI Sandbagging: Language Models can Strategically Underperform on Evaluations, published by Teun van der Weij on June 13, 2024 on LessWrong.
We have written a paper on sandbagging for which we present the abstract and brief results in this post. See the paper for more details. Tweet thread here.
Abstract
Trustworthy capability evaluations are crucial for ensuring the safety of AI systems, and are becoming a key component of AI regulation. However, the developers of an AI system, or the AI system itself, may have incentives for evaluations to understate the AI's actual capability. These conflicting interests lead to the problem of sandbagging - which we define as strategic underperformance on an evaluation. In this paper we assess sandbagging capabilities in contemporary language models (LMs).
We prompt frontier LMs, like GPT-4 and Claude 3 Opus, to selectively underperform on dangerous capability evaluations, while maintaining performance on general (harmless) capability evaluations. Moreover, we find that models can be fine-tuned, on a synthetic dataset, to hide specific capabilities unless given a password. This behaviour generalizes to high-quality, held-out benchmarks such as WMDP.
In addition, we show that both frontier and smaller models can be prompted, or password-locked, to target specific scores on a capability evaluation. Even more, we found that a capable password-locked model (Llama 3 70b) is reasonably able to emulate a less capable model (Llama 2 7b). Overall, our results suggest that capability evaluations are vulnerable to sandbagging.
This vulnerability decreases the trustworthiness of evaluations, and thereby undermines important safety decisions regarding the development and deployment of advanced AI systems.
We publish our code at https://github.com/TeunvdWeij/sandbagging
Results
Evaluators focus on dangerous capability evaluations, where high performance may trigger regulatory interventions. We demonstrate that LMs can be prompted or fine-tuned to selectively underperform on dangerous capability evaluations while maintaining overall performance.
Even smaller models can do this using "password-locking", which is fine-tuning models to hide their actual capabilities unless given a password. We show that when Mistral-7b or Llama3-8b are password locked on synthetic data, they still hide capabilities on a high-quality held-out benchmark (WMDP).
If a seemingly competent AI system scores surprisingly low on a capability benchmark, detecting sandbagging is easy. However, we find that models can accurately aim for specific scores on benchmarks.
Similarly, inconsistent or random performance can be detected. More credibly, strong models could hide capabilities by mimicking weaker models. We demonstrate this capability by successfully fine-tuning Llama 3 70b to emulate Llama 2 7b only when the password is not present.
Our work suggests that capability evaluations are vulnerable to sandbagging, which is bad news, but good to know. In our following project, we will work on how to mitigate this problem. Reach out if you are interested in working on this.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
  continue reading

1851 επεισόδια

Toate episoadele

×
 
Loading …

Καλώς ήλθατε στο Player FM!

Το FM Player σαρώνει τον ιστό για podcasts υψηλής ποιότητας για να απολαύσετε αυτή τη στιγμή. Είναι η καλύτερη εφαρμογή podcast και λειτουργεί σε Android, iPhone και στον ιστό. Εγγραφή για συγχρονισμό συνδρομών σε όλες τις συσκευές.

 

Οδηγός γρήγορης αναφοράς