Rick Song Contributor
Rick Song is co-founder and CEO of Persona.
It’s time for social media and dating apps to face the music and curb fraud, deception and disinformation on their platforms once and for all.
In the beginning, social media and dating apps represented small corners of the internet with just a handful of users. Today, Facebook and Twitter are so big they influence elections, make or break vaccine campaigns, and move markets.
Dating apps like Tinder and Bumble are not far behind, with millions upon millions of people looking to their services to meet their “forever” mate.
But the fun and games are over now. These platforms have chosen profit over trust and safety, creating a gateway for identity theft and online fraud.
Today we all have a friend who’s been “catfished” on Bumble or Tinder; we all have family members who’ve been victimized by online Twitter and Facebook scams. Every day, we hear of new cases where malicious actors steal identities — or create fake new ones — to commit fraud, spread misinformation for political and commercial gain, or promote hate speech.
In most industries, users with fake identities really only impact the business. But when trust is broken on dating and social platforms, it harms users and society at large. And the financial, psychological — and sometimes physical — impact on a person is real.
So who’s accountable for stopping this rise in fraud? Clearly not the platforms themselves, although some claim to be taking action.
In the fourth quarter of 2020, Facebook took down 1.3 billion fake accounts. Enough? Not even close. The fact is social platforms and dating apps today do the bare minimum to prevent fraud. While basic AI and human moderators help, they are outmatched by the sheer volume of users.
Facebook says it has 35,000 people checking content; that’s a legion, but it works out to roughly one moderator for every 82,000 accounts. And as bad actors grow more sophisticated by the day, using deepfakes and evolving techniques like synthetic identity fraud, the scale of the problem keeps growing. Even savvy online users fall prey to these cons.
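That ratio checks out with back-of-the-envelope arithmetic. The sketch below assumes roughly 2.85 billion monthly active accounts, which is an approximation of Facebook’s publicly reported user count around that time, not a figure from this article:

```python
# Back-of-the-envelope check of the moderation ratio.
# Assumption: ~2.85 billion monthly active accounts (approximate
# publicly reported figure for late 2020) and 35,000 moderators.
accounts = 2_850_000_000
moderators = 35_000

accounts_per_moderator = accounts / moderators
print(f"~{accounts_per_moderator:,.0f} accounts per moderator")  # ~81,429 accounts per moderator
```

Rounded, that is the "one moderator for every 82,000 accounts" figure.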
Social and dating platforms have come under fire for moving slowly to combat the problem, but what can be done?
Catfishing is serious business
It’s not difficult to imagine this scenario: You meet someone online and start a conversation. The person says the right things, asks the right questions. The relationship starts to feel “real” and you begin to sense kinship. Before you know it, things escalate; your guard is down and you become blind to red flags. You go as far as calling it love.
You and your new significant other make plans to finally meet in person. They claim they don’t have money for the trip. You trustingly and lovingly send the money, only for this person to ghost shortly after.
While some catfishing incidents resolve on their own with minimal harm inflicted, others — like the example above — can lead to financial extortion and criminal activity. Reported losses to romance scams reached a record $304 million in 2020, according to the Federal Trade Commission.
Actual losses in this underreported area are likely far higher, more so when you count “gray areas” and online begging. Yet most dating apps fail to offer a way to verify identities. Some popular apps — like Tinder — make identity verification voluntary; others offer nothing at all. Who wants to put friction in the way of a new subscriber?
But voluntary verification just scratches the surface. These businesses must do more to block entry to anonymous and faked identities. Given the damage they inflict on societies and their customers, we as a society must demand they step up.
On social networks, identity verification can be a double-edged sword
Romance scams aren’t specific to dating apps; about one-third actually begin on social media. But there are many other reasons to verify identity on social networks. Consumers might want to know if they are engaging with the real Oprah Winfrey or Ariana Grande or some parody account; Winfrey and Grande probably also want that distinction to be apparent.
On a more serious note, pressure is mounting for social networks to control online abusers by verifying identities. In England, the #TrackaTroll movement has gained steam, mainly due to the efforts of British reality star Katie Price. Nearly 700,000 people signed her petition lobbying for Harvey’s Law, named after her disabled son, who has been heavily targeted by anonymous online abusers.
However, many argue strongly against requiring identity verification on social networks. Usually, they point out that requiring verification can endanger domestic abuse survivors and dissidents in countries with repressive regimes that search out and harm political opponents. Moreover, identity verification would not deter many who spread disinformation about politics or vaccines because they want to be identified to build their audience and personal brand.
Today, Facebook and Twitter offer a “verification” review process that awards authentic accounts with a blue checkmark. But this is far from foolproof. Twitter recently paused its “verification” program because it incorrectly verified a number of fake accounts.
Facebook has done more. The social network has long imposed identity verification conditionally, for example, if a user is locked out of their account. They also base identity requirements on content posted, where certain behaviors, words and images trigger a block of the poster, pending verification and human review.
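The conditional, content-triggered verification described above can be sketched as a simple rule pipeline. To be clear, the trigger terms, function names, and return statuses below are all illustrative assumptions; real platforms rely on ML classifiers and human review, not keyword lists:

```python
# Illustrative sketch of content-triggered identity verification.
# All trigger terms, statuses, and names here are hypothetical,
# not Facebook's actual implementation.

FLAGGED_TERMS = {"send money", "wire transfer", "gift card"}  # assumed examples

def review_post(text: str, author_verified: bool) -> str:
    """Return an action for a post: allow it, or hold the author's
    account pending identity verification and human review."""
    lowered = text.lower()
    if any(term in lowered for term in FLAGGED_TERMS) and not author_verified:
        return "hold_pending_verification"
    return "allow"

print(review_post("Hey, can you wire transfer me $500?", author_verified=False))
# hold_pending_verification
print(review_post("Great to meet you yesterday!", author_verified=False))
# allow
```

The point of the conditional design is to keep friction low for verified or benign accounts while gating only the behavior that looks risky.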
The identity arms race
When bad actors create fake identities on dating apps and social media to defraud and harm others, it damages public trust and undoubtedly impacts revenue for these platforms. Social media platforms wrestle daily to reconcile their business objectives of maximizing usage with protecting user privacy — or face increased regulation and loss of consumer trust.
It’s vital to protect identities from thieves and hackers who would misuse them. Imagine a fake Twitter or Facebook account claiming to be you, spreading hate statements. Without a way to disprove your involvement, you might lose your job or worse.
What choices will the platforms make to protect their users — and their own brand? Their decisions have centered on policy and revenue protection rather than technology. Balancing trust-building measures with privacy concerns and their need for revenue is the grand strategic dilemma they must resolve. Regardless, the burden is on them to create a safe space for their users.
Social media and dating platforms must take greater responsibility when it comes to protecting users from fraud and bad actors online.