take place, there must be two human beings, including at least one police officer, who
have decided to act on the positive match.
185. We do not consider that the “human failsafe” is sufficient to discharge the PSED. First, as a matter of principle, it is not material to the PSED, which, as we have observed, is a duty as to the process which needs to be followed, not as to what the substance of the decision should be. Secondly, as was acknowledged at the hearing before us, human beings can also make mistakes. This is particularly well recognised in the context of identification. We would note the well-known warnings which need to be given to juries in criminal trials about how identification can be mistaken, in particular where the witness has never seen the person being identified before: see R v Turnbull [1977] QB 224. Further, and in any event, this feature of the present case does not seem to us to go to the heart of the Appellant’s complaint under Ground 5, which is that SWP have not obtained information for themselves about the possible bias which the software they use may have.

186. The second matter which impressed the Divisional Court was the witness statement of PC Dominic Edgell, who found that there was virtually no difference in the statistics as to race or gender. We have considered that evidence.

187. Mr Edgell reviewed the AFR Locate deployments from after the UEFA Champions League Final of 2017 through to June 2018. During those deployments 290 alerts were generated: 82 were true positives and 208 were false positives. He says that it is important to note that these statistics relate only to persons who generated an alert. The identity of those who passed the camera without generating an alert is unknown.

188. Of the 290 alerts, 188 (65%) related to males. Of the 188 male alerts, 64 (34%) were true positives and 124 (66%) were false positives. In relation to females, of 102 alerts, 18 (18%) were true positives and 84 (82%) were false positives. A number of the female false alerts were matched against primarily two individuals, whom the AFR software provider would refer to as “lambs”. A “lamb” is a person whose face has features so generic that they may match much more frequently.

189. Mr Edgell also reviewed the ethnicity of those who were the subject of an alert. Of the true positives (82), 98% were “white north European”. Of the false positives (208), 98.5% were “white north European”.

190. Mr Edgell therefore concludes, at para. 26:

“From my experience and the information available to me, I have seen no bias based on either gender or ethnicity. …”

191. In our view, this does not constitute a sufficient answer to the challenge based on the PSED. As Mr Squires submitted, Mr Edgell was dealing with a different set of statistics. He did not know, for obvious reasons, the racial or gender profiles of the total number of people who were captured by the AFR technology but whose data was then almost immediately deleted. In order to check whether the technology exhibits racial or gender bias, that information would have to be known. We accept Mr Beer’s submission that it is impossible to have that information, precisely because a safeguard
