When Will the AI Bubble Pop?


2 mins read

Adversarial Examples are Just Bugs, Too

We demonstrate that there exist adversarial examples which are just “bugs”: aberrations in the classifier that are not intrinsic properties of the data distribution. In particular, we give a new method for constructing adversarial examples which do not transfer between models, and do not leak “non-robust features” which allow for learning, in the sense of […]

12 mins read
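
For readers unfamiliar with the setup the excerpt above refers to, the sketch below shows the standard gradient-sign (FGSM) construction of an adversarial example, not the post's specific non-transferable construction. It assumes a trained PyTorch classifier `model`, an input batch `x` with labels `y`, and an illustrative perturbation budget `eps`; all of these names are assumptions for the sake of the example.

```python
# Minimal FGSM sketch: perturb an input in the direction that most
# increases the classifier's loss. This is the baseline attack the
# "bugs vs. features" discussion builds on, not the post's new method.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Return x perturbed by eps in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the gradient's sign and clamp back to a valid image range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```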