With the spread of literacy, the presence of someone familiar with the individual became unnecessary. Descriptions helped identify the person, with more detailed descriptions yielding better results. For instance, it wasn't enough to say the person had a nose in the middle of their face; it was better to describe its slant or whether the eyes were at the same height.
As portraiture developed, it became possible to transmit a likeness of a person. However, this did not guarantee flawless identification. How closely the reference material (the portrait) resembled the person often depended on the skill of the portraitist. There is even a legend that Henry VIII had difficulty recognizing one of his later wives when comparing her to the portrait he had received earlier.
It wasn’t that Hans Holbein the Younger, the portraitist, lacked skill. Rather, he likely added his subjective view of what beauty should look like.
Facial recognition is looking for its place in Estonia
Regardless, to this day, comparing an image with a person remains one of the most common methods of identification. Over time, various tools combining hardware and software have been added.
Estonia, given its small size in both area and population, tends to observe international trends rather than proactively debate important issues itself, for the simple reason that there is still no direct need and nothing to compare against.
This is also true of real-time facial recognition. No public discussion has yet taken place on whether such technologies should be implemented and what risks or benefits they might bring, probably because the need has not yet arisen. Facial recognition is used more in private relationships than in law enforcement, primarily for one-time identification when accessing online environments. However, facial recognition alone is insufficient for executing orders online; additional means of identification are typically required.
The use of facial recognition in elections has been explored, particularly for e-voting rather than traditional in-person voting. A study conducted in 2021 found that applying facial recognition to e-voting would introduce complications, and the idea has not been developed further.
The study noted several challenges that may arise from applying facial recognition in e-elections, some of which would also apply to real-time facial recognition (the first point is illustrated with a short sketch after the list):
- „The error rate of facial recognition can never be zero, as biometrics-based authentication is heuristic. The number of false positives and negatives depends on the threshold set for identification.“
- „Adding facial recognition to Estonia’s e-voting protocol would require both protocol complexity and additional hardware, making the process more error-prone and less user-friendly.“
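To make the first point concrete, here is a minimal illustrative sketch, not taken from the study: it shows how the threshold chosen for declaring a match trades false positives against false negatives. All similarity scores and threshold values are invented for demonstration only.

```python
# Illustrative sketch (not from the 2021 study): how the identification
# threshold trades false positives against false negatives. All similarity
# scores and threshold values below are invented for demonstration only.

# Hypothetical similarity scores produced by a face-matching model (0..1).
genuine_scores = [0.91, 0.85, 0.78, 0.72, 0.66]   # probe really is the enrolled person
impostor_scores = [0.70, 0.61, 0.55, 0.48, 0.33]  # probe is somebody else

def error_counts(threshold):
    # A match is declared when the score meets or exceeds the threshold.
    false_negatives = sum(s < threshold for s in genuine_scores)    # genuine pairs rejected
    false_positives = sum(s >= threshold for s in impostor_scores)  # impostors accepted
    return false_positives, false_negatives

for threshold in (0.50, 0.65, 0.80):
    fp, fn = error_counts(threshold)
    print(f"threshold={threshold:.2f}  false positives={fp}  false negatives={fn}")
```

Raising the threshold suppresses false positives at the cost of more false negatives; no setting drives both to zero, which is exactly the study's point about heuristic, biometrics-based authentication.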
Voting is typically a short-term activity where the voter doesn’t move around the room. However, there’s no guarantee that the same person completes the voting process from start to finish. Using video for real-time facial recognition would be even more challenging, as the person being identified likely won’t be standing still or looking directly at the camera.
Surveillance society: what is it?
We can debate whether we are already in a surveillance society or on the way there. There is no clear definition of what constitutes a surveillance society. Is it a society where technology enables monitoring everyone, but its use is strictly regulated by law? Or is it a society where such monitoring occurs without legal boundaries?
Since data covering all individuals is not yet used as a reference base, we have not yet reached a state of total surveillance.
Even the use of video surveillance has not, by itself, triggered a shift to a surveillance society; earlier technological advancements, such as photography and the telegraph, likewise enabled the rapid recording and transmission of images without producing one.
Law enforcement agencies must recognize their responsibility
Many factors must be considered before deciding whether to implement real-time facial recognition alongside video surveillance in a public space. For example:
1. Why choose facial recognition out of all possible anthropometric measurements? Simply wearing glasses and/or a face mask can be enough to confuse it.
2. What is the reference base? Should it consist only of photographs of wanted persons, of persons suspected of serious crimes, or of anyone suspected of anything?
Considering the history of the Republic of Estonia, one might think that we have strictly limited the activities of law enforcement agencies in monitoring people, and that their employees refrain from any action that might infringe on a person's rights and freedoms even to the smallest degree beyond what legal norms allow.
However, the reality is quite different. Whether we live in a surveillance society or a democratic one, there is no guarantee that everyone's rights are protected, even by the institutions set up to protect those rights, law enforcement agencies included.
Mistakes in personal identification are a major hazard
In addition, the desire to use real-time facial recognition in private relationships has also appeared on the agenda, at the moment very narrowly, in the field of shoplifting. At the same time, the claim that automatic facial recognition is not allowed in Estonia without a person's consent is not quite correct; at least the General Data Protection Regulation (GDPR) contains no such categorical prohibition.
False positives remain a problem in automatic personal identification in private legal relationships as well. In an article published by the BBC on May 26, 2024, one shopper described how he was sent out less than a minute after entering the shop with the words "you are a thief, you must leave the shop!".
According to the same article, London's Metropolitan Police reported that one in 40 identifications had been a false positive. Another person, falsely matched by the police, described what it meant for him to be fingerprinted and detained for 20 minutes: "I was treated as guilty until proven innocent."
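To put that share into perspective, here is a back-of-the-envelope sketch: the one-in-forty figure comes from the article, while the daily match volume below is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope arithmetic: the one-in-40 false-positive share is the
# figure reported in the BBC article; the daily number of matches is a purely
# hypothetical assumption used only for illustration.
false_positive_share = 1 / 40
hypothetical_matches_per_day = 200  # assumed volume, not from the article

wrongly_flagged_per_day = false_positive_share * hypothetical_matches_per_day
print(f"Expected wrongly flagged people per day: {wrongly_flagged_per_day:.0f}")
```

Even a seemingly small error share, applied to a steady stream of matches, produces a steady stream of people who, like those in the article, must prove their innocence.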
Therefore, the introduction of real-time facial recognition must be based on the assumption that any result may be a false positive, and everyone involved must be trained accordingly. The biggest problem, however, is that because the decisions made by the device may not be 100% accurate, someone must review them and make the final call. That is what a human is for. But does anyone dare to assert that the judgment of a particular human decision-maker is always objectively correct?