SF Board of Supervisors Reverses Ruling On Police Using Robots In Certain Deadly Force Situations
‘The Supervisors made it wholly clear that they don’t care if officers’ lives are at risk’
By Evan Symon, December 7, 2022 4:41 pm
After a week of protests following the San Francisco Board of Supervisors’ decision to allow the SFPD to use robots for deadly force in limited situations, the Board reversed its decision on Tuesday.
The battle over allowing police robots to use deadly force dates back to last year following the passage of AB 481. The new law required all law enforcement agencies in California to compile a list of all equipment considered “military,” including what they use it for. San Francisco, which owns several remotely operated robots, added the following to a new policy draft:
“The robots listed in this section shall not be utilized outside of training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments. Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.”
The language surprised many and led to a large debate within the Board of Supervisors, which needed to approve the list. Last week, the Board approved the new policy in an 8-3 vote, but only after late amendments won over several Supervisors by limiting deadly force authorization to only a few high-ranking officers and further restricting the use of such a robot to incidents where all other forms of de-escalation and alternative means had failed.
Despite the amendments and the approval by the Supervisors, large protests occurred outside San Francisco City Hall for a week, with citizens strongly opposed to the robots being allowed to use deadly force, even in a very limited number of situations. Protestors were concerned that the policy was a slippery slope that would lead to expanded usage by the SFPD over time, and that the robots posed a danger regardless.
While many others in San Francisco supported the use of the robots, the Board of Supervisors reversed its decision on that part of the policy on Tuesday, specifically citing citizen concerns.
“In a complete reversal, the Board of Supervisors just voted for a military equipment policy that bans police from using robots to kill,” said Supervisor Dean Preston in a statement on Tuesday. “Thank you to all the residents and civil rights advocates who made their voices heard! The people of San Francisco have spoken loud and clear: There is no place for killer police robots in our city. There have been more killings at the hands of police than any other year on record nationwide. We should be working on ways to decrease the use of force by local law enforcement, not giving them new tools to kill people.”
Supervisor Gordon Mar also said that he had switched his position after voting for the deadly force usage last week, noting in a statement that “Even with additional guardrails, I’ve grown increasingly uncomfortable with our vote & the precedent it sets for other cities without as strong a commitment to police accountability. I do not think making state violence more remote, distanced, & less human is a step forward. I do not think robots with lethal force will make us safer, or prevent or solve crimes.”
A reversal of policy after only one week
However, while many celebrated the Board’s decision on Tuesday and Wednesday, others noted that the issue has now gone back to the Committee, meaning that robot deadly force could still be brought back up in the near future. Despite that possibility, many lamented the Board’s reversal, including the SFPD, which reiterated that it wanted every worst-case-scenario option to be considered.
“We cannot be limited in how we are able to respond if and when the worst-case scenario incident occurs in San Francisco,” said SFPD chief Bill Scott on Wednesday. “The department was interested in having the tools necessary to prevent loss of innocent lives in an active shooter or mass casualty incident. That part of our job is to prepare for the unthinkable. We want to use our robots to save lives – not take them. To be sure, this is about neutralizing a threat by equipping a robot with a lethal option as a last case scenario, not sending an officer in on a suicide mission.”
Frank Ma, a former law enforcement official turned security advisor for businesses in San Francisco, added in a Globe interview on Wednesday, “The Supervisors made it wholly clear that they don’t care if officers’ lives are at risk, and would rather send in police in situations where they don’t fully know where a shooter is. Instead of sending in a robot to deal with it, they want to put more lives at risk. We have the technology, we’ve proved that it can be used responsibly. They just don’t want to hear it, and let a few protestors change their minds instead of having logic and reasoning prevail.”
A decision on robot deadly force may return to the San Francisco Supervisors soon.