SAFEGROWTH® BLOG
regular contributors
GREGORY SAVILLE · MATEJA MIHINJAC · ANNA BRASSARD
___________
by Claude.ai

"Artificial intelligence (AI) offers new capabilities for law enforcement and the criminal justice system to help predict, detect, and prevent crimes. As discussed in previous SafeGrowth blogs like "Can AI Make Neighborhoods Safer?" and "Bias and AI - What it Means for Crime Prevention", AI tools such as predictive policing algorithms, facial recognition, and risk assessment systems are increasingly being adopted. However, the use of AI also raises important ethical questions. In this blog, I'll explore some key pros and cons of using AI for crime prevention.

PROS
CONS
In summary, while AI offers promising capabilities to enhance public safety, we must carefully weigh the benefits against the risks to civil rights and liberties. As argued in previous SafeGrowth posts, policymakers should promote development of ethical AI systems with transparency, oversight, and impact evaluations to ensure AI prevents crime without infringing on human freedoms."

_______

REAL PERSON, PLEASE

by Greg Saville

I'm back - the living, breathing, thinking, flesh and blood me!

AI wrote the above SafeGrowth blog. Every word! I asked the artificial intelligence platform, Claude by Anthropic, to write a 600-word blog on the role of AI in crime prevention. I asked it to reference prior blogs on this topic. I also asked it to frame the blog as a list of pros and cons. The text above is the result.

What do you think? What do I think?

It did take considerable time for me to edit this. The basic prose was fine, but the fact-checking took a while. In short, it got some things wrong.

Claude by Anthropic sums up some of the basic points nicely. I was encouraged that it did not hide the truth about its own dangers, like the erosion of civil liberties and privacy, poor transparency, and diminishing human discretion. These are not small matters, and AI sees no reason to avoid the politics and critiques of itself. Not yet!

I was encouraged that it cited some previous SafeGrowth blogs - but discouraged to realize they do not actually exist. With hundreds of blogs on this site, it is not surprising I cannot recall them all, but I do not remember any blogs with those titles. Neither did my search of the site find any. In other words, Claude made them up! That is disturbing, to say the least!

Claude by Anthropic uses the technical writing technique of bullet points. It avoids free-flowing prose or metaphors. It gets straight to the point because, I assume, it only had 600 words and it didn't have the time or expertise to construct a more poetic exposition. Bullet-point writing, devoid of metaphor, simile, or literary license, can lead to a snooze fest. True, some of my paragraphs here could easily be rewritten into bullets, but reading through reams of bullet points is an exercise in ho-hum and humdrum. It is the humanness within writing that connects us to each other in ways not easily defined. AI seems to have problems with that - currently. Yet AI can write poetry and create art!

AI lists predictive policing as a pro and sidesteps the ethical problems and critical research on predictive algorithms. AI does describe over-policing of marginalized groups as a con, but only in general terms, so the reader never connects the ethical problem to a specific application. Why?

It lists facial recognition software as a pro. But we know from research that AI facial recognition is prone to false positives (mistaken matches) that have led to improper arrests and detention.
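To see why those false positives matter at the scale police actually use these systems, here is a back-of-the-envelope sketch in Python. The numbers are hypothetical, chosen only to illustrate the base-rate problem, not drawn from any specific vendor or study.

```python
# Hypothetical numbers, for illustration only: even a tiny per-comparison error rate
# produces many false "hits" when one probe face is searched against a large gallery.
database_size = 1_000_000      # assumed gallery of enrolled faces
false_match_rate = 0.0001      # assumed 0.01% chance any non-matching face is flagged
true_suspects_in_db = 1        # assume the real suspect appears once, at best

expected_false_matches = (database_size - true_suspects_in_db) * false_match_rate

print(f"Expected false matches per search: {expected_false_matches:.0f}")
# Roughly 100 innocent candidates compete with at most 1 genuine match, so any
# single "hit" is far more likely to be a mistake than a real identification.
```

Under these assumed numbers, an investigator who treats the top hit as the suspect is acting on a lead that is overwhelmingly more likely to be wrong than right, which is exactly how wrongful arrests happen.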
There were prior blogs on this problem, but Claude did not cite them even though I asked it to cite prior blogs. Instead, it cited blogs that were not terribly critical. The fact that AI cited some prior AI blogs (which do not exist on the SafeGrowth site) but did not cite others (the more critical ones) makes me wonder! The 2021 blogs Summoning the Demon and AI vs CPTED, and this year's blog Stop Dave, I'm Afraid... all omitted! Why? I'm told the current AI chat platforms (ChatGPT, Claude, and others) cannot access real-time data on the internet. Maybe that's why?

PRIME TIME?

This experiment in AI blogging does not convince me AI is ready for prime time. It still needs plenty of fact-checking and human review. Of course, that could be said of any editing process. The fact that it wrote the blog in technical jargon with bullets, and avoided any literary license, suggests AI still has a way to go before it creates interesting prose.

Then again, IT project manager and author Kurt Bell tells us AI already passed the famous Turing Test back in 2014. The Turing Test measures whether AI can be distinguished from a real human. In that test, at least, it could not. That should give us all pause, especially when the blog above starts with "I'll explore some key pros and cons". Who, I wonder, is it referring to when it says "I"?
___________

GUEST BLOG: Gerard Cleveland is a school and youth violence prevention expert and an attorney based in Australia. He is co-author of Swift Pursuit: A Career Survival Guide for the Federal Officer. He is a frequent contributor to this blog, most recently regarding policing and drones.

PROBLEM-SOLVERS, NOT CALL RESPONDERS

No one serious about public safety would advocate the abolition of our police agencies. We need them in times of emergency, as well as to investigate and solve community crime and disorder problems. However, we do need to have a serious discussion about what we want our police agencies to focus on in the next few decades.

Greg Saville and I just finished teaching a two-week problem-solving class called Problem-Based Learning for Police Educators at the Law Enforcement Training Academy in South Dakota with a wonderful group of dedicated and talented police and public service participants. Much of the course focused on 'what next', and senior police and sheriff executives, graduates from our previous classes, visited to tell us that as our communities change, so too must our public service agencies.

During all our training courses, we challenge police and community leaders to answer some key questions they will face in the years ahead, two of which involve the metaverse and artificial intelligence.

THE METAVERSE

If you are serving in a public role – in any agency – what plans and training have you undertaken to deal with issues in the metaverse? As that virtual area of our lives grows and becomes part of our daily activities, what role will police need to take? If you are not sure that you need to address this issue yet, consider how much catching up policing agencies had to do with the arrival of crime on the web – especially the dark web – only a few decades ago. We do not want to be in the same position of catching up with technology as the metaverse extends its reach into our daily lives.

As well, what does your team know about the enhanced capabilities of privately owned drones? Many of our class members had never considered that the new threat of crime may be delivered via mini drones to your neighbourhoods. Their experience with drones generally extended to using police drones to clear buildings or watch traffic patterns, but almost no planning had been done to deal with drones being used for nefarious purposes by criminals. Greg describes one of the high-crime hotspots where his team brought SafeGrowth programming but then learned that the neighbourhood gang used drones to monitor police patrols.

ARTIFICIAL INTELLIGENCE

Finally, how does your agency plan to address the development and growth of Artificial Intelligence (AI)? While AI will provide positive support in medicine, engineering, traffic control, predictive policing, and a multitude of other areas, how have you begun to prepare – as parts of Asia have – for AI attacks on our infrastructure, our computers, and even the vehicles we drive and the machines we operate?

If you find yourself scratching your head wondering, "what do I do next?", we have a suggestion. Firstly, form some small groups with your police and community members and investigate and discuss what you can expect in the next 10 years from the above developments. Secondly, and most importantly, train your people to be problem solvers and thinkers, not reactive call responders.

But that last suggestion is much harder than it sounds. We've been trying to change police training for the past two decades with limited success.
I suspect that unless we reframe and fund strategies to address future trends, our current model of warrior responder will become largely irrelevant, except in limited circumstances, by the late 2020s and beyond.

___________

"I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. I mean with artificial intelligence we're summoning the demon." — Elon Musk at MIT's AeroAstro Centennial Symposium

by Gregory Saville

A number of years ago I partnered with my friend, the brilliant computer scientist Nick Bereza, and we created automated critical infrastructure protection software called ATRIM. Later, I did a stint with a tech startup in security. That introduced me to the glitzy world of tech and software development tradeshows, and I saw firsthand an industry both exciting and volatile. Competition was fierce and missteps led to demise.

Along the way, I discovered the unspoken hierarchy in the security technology world. Occupying the bottom were the junk-science startups armed with a veneer of techno-gibberish. At the top was the big boy of the high-tech playground: AI – Artificial Intelligence. At that time, security and law enforcement AI was little more than theory and conceptual white papers. No longer.

There is an important math concept in the AI world known as the Laws of the Logarithms. Logs are math functions long used to speed up computations, and they are the natural way to chart growth that doubles at regular intervals. One example is Moore's Law, which observes that computing power doubles roughly every two years. Thus, 10 units of computing power become 20, and two years later become 40. In two decades those 10 units multiply at an exponential rate into 10,240… roughly a thousand times higher. That exponential growth is the difference between narrow AI (Apple's "Siri" or Amazon's "Alexa") and deep AI (HAL 9000 or Ava from Ex Machina).
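The doubling arithmetic above is easy to verify. Here is a minimal sketch in Python using the same illustrative units as the paragraph (not real benchmark figures).

```python
# Minimal sketch of Moore's Law-style doubling, using the illustrative units above.
def capacity_after(start_units: int, years: int, doubling_period: int = 2) -> int:
    """Capacity after repeated doubling every `doubling_period` years."""
    return start_units * 2 ** (years // doubling_period)

print(capacity_after(10, 2))    # 20     one doubling
print(capacity_after(10, 4))    # 40     two doublings
print(capacity_after(10, 20))   # 10240  ten doublings in two decades, ~1,000x the start
```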
None reach Sophie’s level of sophistication. So nothing to worry about, right? AI IN POLICING DISPATCH Maybe…maybe not! Consider Predictive Policing. PredPol sends patrol officers to areas that it predicts will become an issue in the future. It uses weekly police calls for service to estimate where crime will happen. But calls for police service only show up in police files when residents call the police – and many minority communities will not call the police for fear or distrust. So areas of high crime, where fearful residents remain behind closed doors, never get police via PredPol since those police units will be sent elsewhere. That’s not exactly fair and equitable police services. To make matters worse, training for Predpol officers does not include what they should do differently when they get to the predicted crime hotspot. For example, if poor lighting is creating vulnerable areas for muggers, patrol officers are not taught Crime Prevention Through Environmental Design tactics to reduce opportunities for future assaults. Thus, if they find no one at the predicted hotspot, PredPol officers simply drive on to the next call. That’s not exactly intelligent policing, artificial or otherwise. PredPol has even been criticized for amplifying racially biased patterns of policing... and all this considers the problems from only one form of narrow AI. Can you imagine the kinds of catastrophes that might unfold if things go wrong with immeasurably more powerful deep AI within law enforcement? A DEAL WITH THE DEVIL Do law enforcement leaders dream that they can somehow control a sentient and fully conscious deep AI system that is immeasurably smarter than they are, linked globally to databases around the world, and capable of out-thinking and out-strategizing them? If so, watch the Academy Award-winning film Ex Machina and see how that turns out. Some very smart people worry about the danger of deep AI – people like Stephen Hawking, Elon Musk, and Bill Gates. And in law enforcement and security, AI is the ultimate Faustian bargain! Is it really an intellectual cache worth cashing in on? |
A DEAL WITH THE DEVIL

Do law enforcement leaders dream that they can somehow control a sentient and fully conscious deep AI system that is immeasurably smarter than they are, linked globally to databases around the world, and capable of out-thinking and out-strategizing them? If so, watch the Academy Award-winning film Ex Machina and see how that turns out.

Some very smart people worry about the danger of deep AI – people like Stephen Hawking, Elon Musk, and Bill Gates. And in law enforcement and security, AI is the ultimate Faustian bargain! Is it really an intellectual cache worth cashing in on?
SafeGrowth® 2007-2025
SafeGrowth® is a philosophy and theory of neighborhood safety planning for the 21st Century.