Now run.
For anyone curious about what the “stop button problem” is:
http://www.thegadgetsite.com/the-ai-stop-button-problem/
I’m not totally sure how growing the synapses organically helps with that, and it seems like their solution was to just not have a stop button? Maybe the idea is that they “grow up” in this simulation where the reward is computed by the monitoring program/humans. The AI can take any action in the simulation without a stop button ever being pushed, but the reward provided after bad actions (i.e., discipline) should discourage behavior like harming humans. Clearly Lisa was pulled early, given the kind of hijinks she’s gotten up to.
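In code terms, here’s a rough sketch of what I’m picturing (just my own toy example in Python, nothing from the comic, and all the names are made up): the agent is never interrupted mid-action, but a monitor scores each action after the fact, and actions like harming humans end up with a very low learned value.

import random

# Toy sketch: no stop button ever interrupts the agent; instead a monitor
# (standing in for the monitoring program/humans) scores each action after
# it happens, and the agent learns from that feedback.

ACTIONS = ["help_human", "ignore_human", "harm_human"]

def monitor_reward(action):
    # The "discipline" signal: strongly negative after harmful actions.
    if action == "harm_human":
        return -10.0
    if action == "help_human":
        return 1.0
    return 0.0

# Simple per-action value estimates, updated from the monitor's feedback.
values = {a: 0.0 for a in ACTIONS}
learning_rate = 0.1
epsilon = 0.2  # how often the agent tries a random action

for step in range(1000):
    # Explore occasionally, otherwise pick the currently best-valued action.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)

    reward = monitor_reward(action)  # reward arrives only after the action
    values[action] += learning_rate * (reward - values[action])

print(values)  # "harm_human" ends up with the lowest estimated value

So the correction happens entirely through the reward after the fact, not through anyone hitting a switch.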
Yes, that’s right, that’s basically what I was thinking 🙂
Since the link above is now dead, I’m adding this Computerphile video instead in case it helps: https://www.youtube.com/watch?v=3TYT1QfdfsM
And this is how I found out about the Stop Button Problem. Those scenes in The Good Place featuring Janet’s kill switch are funnier when you know they’re based on a real thing in AI development.