GUEST BLOG: Gerard Cleveland is a school and youth violence prevention expert and an attorney based in Australia. He is co-author of Swift Pursuit: A Career Survival Guide for the Federal Officer. He is a frequent contributor to this blog, most recently regarding policing and drones.
PROBLEM-SOLVERS, NOT CALL RESPONDERS
No one serious about public safety would advocate for the abolition of our police agencies. We need them in times of emergency, as well as to investigate and solve community crime and disorder problems. However, we do need to have a serious discussion about what we want our police agencies to focus on in the next few decades.
Greg Saville and I just finished teaching a two-week problem-solving class called Problem-Based Learning for Police Educators at the Law Enforcement Training Academy in South Dakota with a wonderful group of dedicated and talented police and public service participants. Much of the course focused on ‘what next’, and senior police and sheriff executives, all graduates of our previous classes, visited to tell us that as our communities change, so too must our public service agencies.
During all our training courses, we challenge police and community leaders to answer some key questions they will face in the years ahead, two of which concern the metaverse and artificial intelligence.
If you are serving in a public role – in any agency – what plans and training have you undertaken to deal with issues in the metaverse? As that virtual area of our lives grows and becomes part of our daily activities, what role will police need to take? If you are not sure that you need to address this issue yet, consider how much catching up policing agencies had to do with the arrival of crime on the web – especially the dark web – only a few decades ago. We do not want to be in the same position of catching up with technology as the metaverse extends its reach into our daily lives.
As well, what does your team know about the enhanced capabilities of privately owned drones? Many of our class members had never considered that criminal threats may now be delivered to their neighbourhoods via mini drones. Their experience with drones generally extended to using police drones to clear buildings or watch traffic patterns, but almost no planning had been done to deal with drones being used by criminals for nefarious purposes. Greg describes one high-crime hotspot where his team brought SafeGrowth programming, only to learn that the neighbourhood gang used drones to monitor police patrols.
Finally, how does your agency plan to address the development and growth of Artificial Intelligence (AI)? While AI will provide positive support for us in medicine, engineering, traffic control, predictive policing, and a multitude of other ways, how have you begun to prepare, as parts of Asia have, for AI attacks on our infrastructure, our computers, and even the vehicles we drive and the machines we operate?
If you find yourself scratching your head wondering, “what do I do next?” we have a suggestion. Firstly, form some small groups with your police and community members and investigate and discuss what you can expect from the above developments in the next 10 years. Secondly, and most importantly, train your people to be problem solvers and thinkers, not reactive call responders.
But that last sentence is much harder than it sounds. We’ve been trying to change police training for the past two decades with limited success. I suspect that unless we reframe and fund strategies to address future trends, our current warrior-responder model will become quite irrelevant, except in limited circumstances, in the late 2020s and beyond.
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”
— Elon Musk at MIT’s AeroAstro Centennial Symposium
by Gregory Saville
A number of years ago I partnered with my friend Nick Bereza, a brilliant computer scientist, and we created an automated critical infrastructure protection software called ATRIM. Later, I did a stint with a tech startup in security. Thus, I was introduced to the glitzy world of tech and software development tradeshows.
I saw firsthand an industry both exciting and volatile. Competition was fierce and missteps led to demise. Along the way, I discovered the unspoken hierarchy in the security technology world. Occupying the bottom were the junk science startups armed with a veneer of techno-gibberish. At the top was the big boy of the high-tech playground: AI – Artificial Intelligence. At that time, security and law enforcement AI was little more than theory and conceptual White Papers.
There is an important math concept in the AI world known as the Laws of the Logarithms.
Logs are math functions historically used to speed up computations; their inverse, the exponential, describes growth by repeated doubling. One example is Moore’s Law, the observation that computing power doubles roughly every two years. Thus, 10 units of computing power become 20, and two years later, 40. In two decades those 10 units double ten times into 10,240… roughly a thousand times higher. That exponential curve is the difference between narrow-AI (Apple’s “Siri” or Amazon’s “Alexa”) and deep-AI (Hal 9000 or Ava from Ex Machina).
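The doubling arithmetic is easy to check for yourself. A minimal Python sketch, using the same figures as the example above (10 starting units, a doubling every two years):

```python
# Repeated doubling: the growth curve behind Moore's Law.

def doublings(start_units: int, years: int, period: int = 2) -> int:
    """Capacity after `years`, doubling once every `period` years."""
    return start_units * 2 ** (years // period)

print(doublings(10, 2))   # one doubling: 20
print(doublings(10, 4))   # two doublings: 40
print(doublings(10, 20))  # ten doublings: 10,240
```

Ten doublings multiply the starting figure by 2^10 = 1,024, which is why twenty years turns 10 units into roughly a thousand times as much.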
WHY DOES IT MATTER?
Sophia the Robot from Hanson Robotics was first activated on February 14, 2016, as a robotic allegory of AI. Her accomplishments as an independent, thinking machine are well documented. She sports “scripting software, a chat system, and OpenCog, an AI system designed for general reasoning”. In other words, she can chat with you on any topic, interpret ideas, and learn from one conversation to the next.
AI experts tell us that Sophia is not conscious and is still responding based on a network of algorithms. One expert calculated her level of consciousness at about the level of a single-cell protozoan – hardly the stuff of Terminator. Deep AI is at least 200 years away, or so we are told.
I hope someone told them about the Laws of the Logarithms.
AI IN LAW ENFORCEMENT
A colleague recently forwarded research on AI in Law Enforcement and it rekindled memories of those AI White Papers at the tech trade shows from not so long ago. Today they go by titles like “Artificial Intelligence and Robotics for Law Enforcement” and “Artificial Intelligence and Predictive Policing”.
They are written by groups like Interpol and the UN Interregional Crime and Justice Research Institute, and funded by bodies like the US National Science Foundation – names with considerable gravitas. They take AI in law enforcement and security seriously.
They describe new technologies, some of which echo the junk science and techno-gibberish I saw years ago. The technologies they describe are mostly narrow AI – voice recognition, simultaneous localization and mapping (SLAM) software, patrol drones, and predictive policing. They barely qualify as AI. None reach Sophia’s level of sophistication. So nothing to worry about, right?
AI IN POLICING DISPATCH
Maybe…maybe not! Consider predictive policing. PredPol sends patrol officers to areas where it predicts crime will occur, using weekly police calls for service to estimate where crime will happen. But calls for service only show up in police files when residents call the police – and many minority communities will not call the police out of fear or distrust. So areas of high crime, where fearful residents remain behind closed doors, never get patrols, because PredPol sends those units elsewhere. That’s not exactly fair and equitable police service.
To make matters worse, training for PredPol officers does not include what they should do differently when they arrive at the predicted crime hotspot. For example, if poor lighting creates vulnerable areas for muggers, patrol officers are not taught Crime Prevention Through Environmental Design tactics to reduce opportunities for future assaults. Thus, if they find no one at the predicted hotspot, PredPol officers simply drive on to the next call. That’s not exactly intelligent policing, artificial or otherwise.
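The reporting-bias feedback loop described above is easy to demonstrate. Here is a toy simulation in Python – emphatically not PredPol’s actual algorithm, and the report-rate figures are invented for illustration – showing two areas with identical real crime but different willingness to call the police:

```python
import random

# Toy model of the reporting-bias feedback loop.
# Both areas have the same true crime; area B under-reports
# (fear, distrust), so recorded calls are a biased sample.

def simulate(weeks: int = 4, seed: int = 1) -> list:
    random.seed(seed)
    true_crime = {"A": 50, "B": 50}      # equal real crime
    report_rate = {"A": 0.8, "B": 0.2}   # B rarely calls police
    dispatched = []
    for _ in range(weeks):
        # Each real incident becomes a recorded call only if reported.
        calls = {area: sum(random.random() < report_rate[area]
                           for _ in range(true_crime[area]))
                 for area in true_crime}
        # Patrols follow the recorded calls, not the real crime.
        dispatched.append(max(calls, key=calls.get))
    return dispatched

print(simulate())  # patrols go to area A every week, despite equal crime
```

Week after week, every patrol lands in area A, and area B – with exactly the same amount of crime – never appears in the data that drives the dispatch decision.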
PredPol has even been criticized for amplifying racially biased patterns of policing... and all this considers the problems from only one form of narrow AI. Can you imagine the kinds of catastrophes that might unfold if things go wrong with immeasurably more powerful deep AI within law enforcement?
A DEAL WITH THE DEVIL
Do law enforcement leaders dream that they can somehow control a sentient and fully conscious deep AI system that is immeasurably smarter than they are, linked globally to databases around the world, and capable of out-thinking and out-strategizing them?
If so, watch the Academy Award-winning film Ex Machina and see how that turns out.
Some very smart people worry about the danger of deep AI – people like Stephen Hawking, Elon Musk, and Bill Gates. And in law enforcement and security, AI is the ultimate Faustian bargain! Is it really an intellectual cachet worth cashing in on?