AlgoRail in Spain: How algorithmic forecasting can help decrease gender violence

VioGén, a system that has assessed the risk of gender-based violence in Spain since 2007, has shown that a well-staffed and specifically trained police force is vital to its success. At the sixth stop of our AlgoRail through Europe, Michele Catanzaro reports how the algorithm has been continuously refined to better protect Spanish women and children.

In the early morning of 24 February 2018, a Spanish psychologist went to a police station to report threats from her husband. According to her statement, her husband had broken their younger child's buggy and slapped the older one. After asking the woman a set of questions and feeding the answers into VioGén, a software that helps the Spanish police estimate the risk of recidivism in gender-based violence, the officer issued a report in which the risk was deemed low.

Critical failure

A judge denied her request that her husband be forbidden to visit their children, basing the decision partly on the low-risk estimate made by the police. Seven months later, the husband killed their children "with cruelty" and threw himself out of a window. The shocking story left people wondering why the case had been deemed low risk. VioGén had missed its goal of supporting police personnel in assessing the risk of new assaults, and thus of assigning the right level of protection. Since the software was first deployed in 2007, a series of "low risk" cases have ended in the homicide of women or children.

Better than nothing

Still, the program is by far the most complex of its sort in the world and has reasonable performance indexes. Nobody believes that things would be better without it. But critics point out some flaws. Few police personnel are trained in gender-based violence. Moreover, the program may have systematically underestimated risk. Some victims' organizations argue that the very possibility of a low risk score is nonsense: reporting to the police is a high-risk situation in itself, they say, because abusers perceive it as a challenge.

Reporting an assault

When a woman goes to report an assault by an intimate partner, she triggers a process in which, first, the police agent goes through an online form with her. Questions explore the severity of previous assaults, the features of the aggressor, the vulnerability of the victim, and aggravating factors. The answers are fed into a mathematical formula that computes a score, measuring the risk that the aggressor will repeat violent actions. While it is known that the algorithm gives more weight to items that empirical studies have shown to be more closely related to recidivism, the exact formula has not been disclosed.
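Since the real formula is secret, the mechanism can only be illustrated. The sketch below shows the general shape of such a weighted-questionnaire score; every item name, weight, and threshold here is invented for demonstration and does not reflect VioGén's actual contents.

```python
# Illustrative sketch only: VioGén's real items, weights, and cutoffs are not public.
# Hypothetical questionnaire items, grouped like the form described above.
WEIGHTS = {
    "prior_physical_assault": 3,   # severity of previous assaults
    "aggressor_has_weapons": 4,    # features of the aggressor
    "victim_isolated": 2,          # vulnerability of the victim
    "recent_separation": 2,        # aggravating factors
}

# Score cutoffs mapped to the risk levels the article mentions (made-up values).
THRESHOLDS = [(0, "unobserved"), (3, "low"), (6, "medium"), (9, "high"), (11, "extreme")]

def risk_level(answers):
    """Sum the weights of affirmative answers and map the total to a risk level."""
    score = sum(w for item, w in WEIGHTS.items() if answers.get(item))
    level = THRESHOLDS[0][1]
    for cutoff, name in THRESHOLDS:
        if score >= cutoff:
            level = name
    return level

# Two affirmative answers (3 + 4 = 7) land in the "medium" band of this toy scale.
print(risk_level({"prior_physical_assault": True, "aggressor_has_weapons": True}))
```

The key design point such systems share is that empirically stronger predictors of recidivism get larger weights, so a single high-weight item can outweigh several minor ones.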

Keeping the score

In theory, Spanish agents can increase the score by hand if they perceive a higher risk. But a 2014 study found that they stuck to the automatic outcome in 95% of cases. Once a case's score is established, the agent decides on a package of protection measures associated with that level of risk. The police meet again with the woman to fill in a second form, in order to assess whether the situation has worsened or improved. This happens periodically, more or less frequently depending on the risk level. The police stop following up only if judicial measures are not pursued and the risk level falls below medium.

The best available system

VioGén is the best device available to protect women's lives, according to Ángeles Carmona, president of the Domestic and Gender-Based Violence Observatory of the Spanish General Council of the Judiciary. She recalls a case she saw in a court in Seville, of an aggressor who, according to VioGén, had a high risk of recidivism. The man was fitted with a tracking wristband. One day, the police saw that the wristband's signal was moving fast towards the victim's home. They broke in just in time to prevent him from suffocating her with a pillow.

It's impossible to know how many lives have been saved thanks to VioGén. A widely used measure of performance for predictive models is the Area Under the Curve (AUC), and a 2017 study that tried to measure how good the system was calculated values between 0.658 and 0.8. An AUC of 0.5 is as good as a coin toss, and an AUC of 1 means the model never fails. In other words, VioGén works. Comparing VioGén with other instruments that assess the risk of intimate partner violence, one can conclude that it is among the best available.
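The AUC scale can be made concrete with a small sketch. One standard way to read AUC is as the probability that a randomly chosen positive case (here, a case that ended in recidivism) receives a higher risk score than a randomly chosen negative case; the toy data below is invented purely to show how the 0.5-to-1 scale behaves.

```python
def auc(labels, scores):
    """AUC as the probability that a random positive outranks a random negative
    (the Mann-Whitney formulation). Ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    pairs = [(p, n) for p in pos for n in neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Toy example: labels mark whether violence recurred, scores are model outputs.
# A model that ranks every recidivist above every non-recidivist reaches 1.0;
# one that cannot separate them at all sits at 0.5, the coin-toss baseline.
print(auc([1, 1, 0, 0], [0.9, 0.7, 0.4, 0.2]))  # 1.0
print(auc([1, 0], [0.3, 0.3]))                  # 0.5
```

On this scale, VioGén's reported 0.658 to 0.8 means its scores separate recurring from non-recurring cases clearly better than chance, though far from perfectly.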

Ignored requirements

In 2017, there were a total of 654 agents in all of Spain belonging to the Women-Children Teams of the Guardia Civil, far fewer than one per police station. This falls well short of what the 2004 law that created VioGén required: according to it, cases should be dealt with by an interdisciplinary team including psychologists, social workers, and forensic doctors. Several teams were created after the law was passed in 2004, but the process was cut sharply by the austerity that followed the 2008 financial crisis.

VioGén 5.0

A new protocol was put in place in March 2019, the fifth big change VioGén has gone through since its first deployment in 2007. Now the program identifies cases "of special relevance", in which the danger is high, and cases "with minors at risk". How the new scale was built has likewise not been disclosed, but it was based on a four-year study of which factors were specifically related to cases ending in homicide. The new protocol seems to have triggered a major shift in VioGén's risk scores: the number of extreme-risk cases rose and the number of high-risk cases almost doubled.

That’s it for this sixth stop of our AlgoRail through Europe, on which we want to learn more about how algorithmic systems are used in our European neighborhood. Next week we will cross the country and continue to Portugal.


This story was shortened by Julia Gundlach. The unabridged story was published on the AlgorithmWatch website.

The blog series AlgoRail is part of the Automating Society Report 2020 by Bertelsmann Stiftung and AlgorithmWatch, which will be published this fall and is coordinated by Dr. Sarah Fischer. In addition to journalistic stories like this one, the report gives an overview of various examples of algorithmic systems as well as current debates, policy responses and key players in 15 countries. A first issue of the report was published in January 2019.


This text is licensed under a Creative Commons Attribution 4.0 International License


