Jeffrey Ladish
By day I work on the security team at Anthropic, securing large language models. By night, I think about the long-term implications of technology and human development. I love to rollerblade, rollerskate, climb trees, explore tunnels and rooftops, hang out with animals, and laugh with people. I grew up in the Pacific Northwest, and some part of my heart will always be there.
Sessions
The recent Nature publication, "Dual use of artificial-intelligence-powered drug discovery," demonstrated that a machine learning model designed to predict toxicity for drug discovery could be co-opted to generate deadly chemical weapon compounds, including the VX nerve agent and novel, uncharacterized toxic chemical substances. It seems likely that there will be many more such dangerous applications of AI systems as these systems grow more powerful. At the same time, AI systems are likely to be increasingly useful for a wide variety of applications, including defensive security. Given this tension, how can the infosec community help develop good norms around the security of AI systems? The infosec community has a lot of experience navigating tradeoffs in vulnerability disclosure norms. Can these lessons be applied to AI systems that might be capable of discovering or generating vulnerabilities in both computer systems and biological systems?