Show me what you've broken
See AI safety mindset. If you want to demonstrate competence at computer security, cryptography, or AI alignment theory, you should first think in terms of exposing technically demonstrable flaws in existing solutions, rather than solving entire problems yourself. Relevant Bruce Schneier quotes: “Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail” and “Anyone can invent a security system that he himself cannot break. Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.”
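A minimal illustration of this exercise is attacking a deliberately weak cipher. The toy single-byte XOR scheme below is a hypothetical example (not any real system); it can be broken by brute force over all 256 keys, which is exactly the kind of technically demonstrable flaw the quote asks you to exhibit:

```python
def toy_encrypt(plaintext: bytes, key: int) -> bytes:
    """'Encrypt' by XORing every byte with one repeated key byte."""
    return bytes(b ^ key for b in plaintext)

def break_toy_cipher(ciphertext: bytes) -> tuple[int, bytes]:
    """Recover the key by trying all 256 possibilities and scoring
    each candidate plaintext on how English-like it looks."""
    def score(k: int) -> int:
        # Count bytes that decrypt to ASCII letters or the space character.
        return sum(p == 0x20 or 0x41 <= p <= 0x5A or 0x61 <= p <= 0x7A
                   for p in (b ^ k for b in ciphertext))
    key = max(range(256), key=score)
    return key, toy_encrypt(ciphertext, key)

ciphertext = toy_encrypt(b"attack at dawn", 0x5A)
key, recovered = break_toy_cipher(ciphertext)
```

The point of the sketch is the direction of effort: the attack code is shorter and easier to write than any argument that the cipher is secure would have been.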
Parents:
- AI safety mindset: Asking how AI designs could go wrong, instead of imagining them going right.
Comments:

- This isn’t the case in modern cryptography, except perhaps for the design of ciphers, and it seems at best debatable in the case of value alignment. It seems to matter for ciphers because “we couldn’t break it” is generally the best result you can hope for; in most other areas (including the rest of cryptography, and value alignment) we can say much more.
- I don’t understand what you mean. In computer security generally, breaking an existing system, especially one in wide use or one that has been subject to prior scrutiny, is a source of great prestige and a way of demonstrating competence. This happens far more often than somebody finding a new basic mathematical flaw in a widely used cryptographic system. What did you think Bruce Schneier meant?
- (I don’t know Bruce Schneier’s view; I was replying to this post.)
- I’m still not clear on what you think is false, or what you think the reality is. Computer security and cryptography begin by understanding how to break systems.