
SFPD Pushes For Police Robots For Deadly Force Situations

Similar proposal failed in Oakland last month

Beautiful view of business center in downtown San Francisco at sunset. (Photo: f11photo/Shutterstock)

The San Francisco Police Department is continuing to push a proposal that would allow it to use robots to deliver deadly force in situations where lives are at risk, according to a new policy draft released earlier this week, with the matter going before the Board of Supervisors next week.

Specifically, in a use of equipment policy update, the language of the draft reads:

“The robots listed in this section shall not be utilized outside of training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments. Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.”

That particular part of the proposal has been the subject of discussions between the San Francisco Board of Supervisors and the SFPD for several weeks. While several Supervisors have criticized the SFPD for not including some weapons under the “military-style” definition, the prospect of using robots to potentially kill those lethally threatening the public or officers raised the ire of many, including Supervisor Aaron Peskin, who added language to the draft stating, “Robots shall not be used as a use of force against any person.”

“The original policy they submitted was actually silent on whether robots could deploy lethal force,” noted Peskin. “We decided to approve the SFPD’s caveated guidelines because the department had made the case that there could be scenarios where deployment of lethal force was the only option.”

The SFPD quickly pushed back on that addition, replaced the sentence with the current wording, and soon won approval from a Supervisors committee after convincing members that lethal force delivered by a robot via an explosion could be needed in certain situations. The SFPD has repeatedly stressed that such a scenario would be a “rare and exceptional circumstance.”

“SFPD does not have any sort of specific plan in place, as the unusually dangerous or spontaneous operations where SFPD’s need to deliver deadly force via robot would be a rare and exceptional circumstance,” said the SFPD in a statement.

While an explosive device delivered by a police robot has never been used to kill a hostile person in California, the tactic has occasionally been used elsewhere in the U.S. Most notably, such a robot was used in Dallas in 2016 against a sniper targeting police officers. In October, Oakland police attempted to pass a policy similar to the SFPD proposal, but came up short on support because of lawmaker and public outcry. San Francisco is only the latest city to propose such a policy, with others currently deciding where they fall on the issue due to a new state law on military equipment.

“We’ll be seeing police and city decisions on all sorts of military weapons and equipment used by police due to AB 481,” Wesley Riggs, a security merchandise supplier who focuses on non-lethal items, told the Globe Friday. “Everyone now has to make a list of what equipment they have that falls under being ‘military,’ and that includes what it is used for, hence why everyone is trying to define what their robots are for. That’s really why we heard from Oakland and San Francisco, and will hear from others. Everyone is complying, and they don’t want to give up anything that could potentially save the lives of innocent people.”

“This isn’t just a sudden decision to start bombing bad guys with robots, as many have claimed. No, this is just police departments having to legally outline things and many departments not wanting to give up a functionality that could be used by them to save innocent people. That’s it really.”

The SFPD proposal is set to be heard formally by the Board on November 29th.


Evan V. Symon is the Senior Editor for the California Globe. Prior to the Globe, he reported for the Pasadena Independent, the Cleveland Plain Dealer, and was head of the Personal Experiences section at Cracked. He can be reached at evan@californiaglobe.com.
