I’ve made similar points in the past in discussions about robot soldiers going to war. There’s an upside to these things that people insist on overlooking: they follow their programming. If you program a robot soldier to never shoot at an ambulance, then it will never shoot at an ambulance, even if it’s having a really bad day. Same here: if the security robot has been programmed never to leave the public sidewalk, then it’ll never leave the public sidewalk.
It’s always possible for these sorts of things to be programmed to do the wrong things, of course. But at least now we have the ability to audit that programming.
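To make the point concrete, here’s a toy sketch of what such an auditable rule might look like. Everything here is hypothetical (the zone names and function names are invented for illustration), but it shows the key property: the constraint is an explicit, inspectable piece of code rather than a guess about a human’s judgment.

```python
# Hypothetical sketch: a hard movement constraint in a security robot's
# control loop. An auditor can read this and verify the rule directly.

ALLOWED_ZONE = "public_sidewalk"  # invented name for illustration

def move_allowed(target_zone: str) -> bool:
    # The rule is a single explicit check, not a judgment call.
    return target_zone == ALLOWED_ZONE

def move(current_zone: str, target_zone: str) -> str:
    # Refuse any move that would leave the permitted zone,
    # regardless of what the rest of the system requests.
    if not move_allowed(target_zone):
        return current_zone  # stay put
    return target_zone

print(move("public_sidewalk", "private_lawn"))     # robot stays on the sidewalk
print(move("public_sidewalk", "public_sidewalk"))  # permitted move proceeds
```

The design choice is that the constraint sits outside the decision-making logic entirely: whatever the robot "wants" to do, this check runs last, and anyone reviewing the code can confirm it.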
It’s not wrong for either to draw inspiration from the other. It’s the hypocrisy that’s wrong.