Face App: Recipe for ID Theft
In recent days, my Instagram and Twitter feeds have been inundated with photos of people I know, who suddenly look much older than I remember them.
These friends have bought into the Face App craze.
Face App applies the computing power of artificial intelligence to recreate human facial features in a way that adds (or subtracts) years of natural ageing. The results are either remarkable or, in some cases, creepy.
Here’s the thing, though. To use the app you must upload a current image of yourself to the Face App servers. As with some other photo-sharing services, that brings risks to privacy and personal security of which most users are unaware.
Face App, however, takes these risks to a higher level.
The terms of use statement for the Face App service is quite clear on the following points. (The terms are not quoted directly here, but they have been widely reported.)
The user’s original photo - and its doctored version - can be used by the people behind Face App for any purpose they choose. They can do this without asking for the user’s consent, or notifying them, or sharing with them any profit arising from the use of their images.
What’s more, the company can display users’ images via any media, including forms of media that haven’t yet been invented. They can also display usernames - and accompanying real names - in said media.
All of this, say the terms and conditions, applies in perpetuity. This should make even the most cloud-friendly AI advocate wary.
To add to the unease, the company behind Face App is Russian-owned.
Now, Russia - as a state or a business environment - has no monopoly on the use of the internet in underhand ways.
Yet there is ample evidence of Russian-based interference in online activities, from foreign elections to corporate espionage and beyond.
Recently, governments in the US and Europe have moved toward greater levels of regulation on companies like Facebook, which have often treated users’ data in a cavalier way.
Facebook has a great many faults. I have advocated publicly that individuals delete their Facebook accounts. Yet Facebook’s major headquarters at least operate within systems that encourage official scrutiny and regulatory action against privacy infringements.
The recent $5 billion settlement agreed between Facebook and the US Federal Trade Commission serves as an example of punishment for errant behaviour.
Software companies with global aspirations are not necessarily subject to that level of scrutiny in the wild west that is the Russian corporate scene.
Face App seems like innocent fun, but its terms of use raise the worrying possibility of identity theft.
Stealing an identity is relatively simple these days. It can, said one study, be accomplished for about the price of a can of Coke.
A key feature of the process is the collection of photos and names of real people, which are then utilised to produce new backstories and identities.
These are sold to people who, for whatever nefarious reasons, don’t want to operate publicly under their given names, or with their personal histories attached.
Of course, supplying images to any photo-sharing site opens up the possibility of identity theft. This ought to concern every parent who posts photos of children or teens to Instagram and the like.
Face App, however, takes this to a new level because of its insistence that it can use images anywhere and at any time, even on media not yet devised.
Rapidly emerging AI-driven technologies flag all kinds of potential problems for privacy and personal security. Regulators struggle to keep up with the technology and its potential for misuse and abuse.
In light of all of this, Face App represents a major case for adopting the principle of caveat emptor.