For anyone curious about what the “stop button problem” is:
I’m not totally sure how growing the synapses organically helps with that; it seems like their solution was simply not to have a stop button? Maybe the idea is that they “grow up” in a simulation where the reward is computed by the monitoring program/humans. The AI can take any action in the simulation without a stop button being pushed, but the reward (i.e. discipline) provided after bad actions should discourage behavior like harming humans. Clearly Lisa was pulled early, given the kind of hijinks she’s gotten up to.
Yes, that’s right, that’s basically what I was thinking 🙂
Since the link above is now dead, I’m adding this Computerphile video instead in case it helps: https://www.youtube.com/watch?v=3TYT1QfdfsM