

By Paul Holland, CEO at Beyond Encryption
Mid-February saw the introduction of Meta Verified – a subscription-based model, similar to Twitter Blue, that allows Instagram and Facebook users to ‘establish and grow’ their online presence. Subscribers will undergo an authentication process to receive a verified badge, but what are the privacy implications of this move? And is Meta capable of creating and enforcing the change needed to protect individuals from online threats?
To answer these questions, we must first consider why Mark Zuckerberg has decided to follow in Elon Musk’s footsteps. One clear reason is Meta’s declining profitability, with income falling 52% in the third quarter of 2022. This dramatic drop has been attributed largely to reduced ad spending amid the recent economic downturn, leaving Meta with significantly lower advertising revenue.
Critics say that the subscription, which costs users between $11.99 and $14.99 a month, may simply be an attempt to diversify revenue streams and move away from a reliance on advertising. However, when considering the features that Meta Verified offers, some argue it could be a stepping stone towards a safer social media environment – if done correctly.
The majority of social platforms now suffer from an inundation of bots and other fake accounts, which are often used to spread false information, manipulate public opinion, or trick users into revealing personal information. With digital safety now a pressing concern, many users have long been calling for verification measures, such as government ID checks, to combat the issue. Meta claims that the new subscription will make users safer on Facebook and Instagram, providing them with the reassurance that the people they’re interacting with ‘are who they say they are’.
But as major tech players and governments across the globe look for more effective ways to regulate online spaces – from the UK’s Online Safety Bill to the EU’s digital identity plans – we must consider whether Meta is the right institution to be leading the charge.
One of the most fundamental reasons they may be unsuitable to do so is the very concept of paid-for verification, which creates a two-tiered system in which only those who can afford the monthly fee are granted protections that everyone needs. A survey by the Identity Theft Resource Center found that in 2022, 85% of respondents had their Instagram accounts compromised and 25% had their Facebook accounts compromised, showing just how widely Meta’s platforms are targeted. Going forward, attempts to create safer online environments must be carried out with inclusivity and universal access in mind; otherwise, we risk leaving a significant number of digital users exposed.
We must also question the sheer amount of data that Meta will be responsible for when verifying its users, and whether they can be trusted to hold this information. Previous incidents, such as the Cambridge Analytica scandal in which the data of 87 million people was harvested without their consent, suggest they may not be.
A survey from The Washington Post reveals that Meta-owned social media platforms are among the least trusted services, with 72% of respondents saying they do not trust Facebook to handle their personal information and data responsibly, and 60% saying the same about Instagram.
However, research has also revealed that more than two-thirds (68%) of consumers worldwide are worried about governments using their data, highlighting a lack of consumer trust across leading organisations and institutions alike. If we are to succeed in creating safer online spaces, we must find a way to build trust without impinging on consumer privacy or requiring centralised bodies to hold vast amounts of personal data.
One proposed method is crowd authentication, which allows consumers to maintain their privacy while still enabling verification. The process involves analysing data from a network of interactions – such as the nature and frequency of digital messages and transactions – to assign a trust value to each connection. This score gives users assurance that the parties they are dealing with are legitimate and authenticated.
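To make the idea concrete, here is a minimal sketch of how such a connection score might be computed. The weights, thresholds, and function names are hypothetical assumptions for illustration only – this is not Meta’s model or any published crowd-authentication algorithm, just one way the ‘nature and frequency of interactions’ principle could be expressed in code.

```python
# Illustrative sketch of crowd-authentication scoring.
# All weights and rules below are hypothetical placeholders.

from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Interaction:
    """One observed exchange between two accounts (message, payment, etc.)."""
    counterpart_id: str
    kind: str                   # e.g. "message" or "transaction"
    timestamp: datetime
    counterpart_verified: bool  # is the other party already authenticated?


def connection_score(interactions: list[Interaction], now: datetime) -> float:
    """Assign a trust value to a single connection from the nature and
    frequency of its interactions. Weights are arbitrary assumptions."""
    if not interactions:
        return 0.0

    # Frequency: regular contact over a longer window earns more trust.
    span_days = max((now - min(i.timestamp for i in interactions)).days, 1)
    frequency = len(interactions) / span_days

    # Nature: transactions weigh more than casual messages, and
    # interactions with already-verified accounts count extra.
    nature = sum(
        (2.0 if i.kind == "transaction" else 1.0)
        * (1.5 if i.counterpart_verified else 1.0)
        for i in interactions
    ) / len(interactions)

    # Combine into a bounded 0-1 score.
    raw = frequency * nature
    return min(raw / (raw + 1.0), 1.0)


def account_trust(connections: dict[str, list[Interaction]], now: datetime) -> float:
    """Aggregate per-connection values into an overall assurance score."""
    if not connections:
        return 0.0
    scores = [connection_score(ints, now) for ints in connections.values()]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    now = datetime(2023, 3, 1)
    history = {
        # A long-running, regular conversation with a verified contact.
        "alice": [
            Interaction("alice", "message", now - timedelta(days=d), True)
            for d in range(1, 30, 3)
        ],
        # A single message from an unknown, unverified account.
        "unknown_bot": [
            Interaction("unknown_bot", "message", now - timedelta(days=1), False)
        ],
    }
    print(f"Overall trust score: {account_trust(history, now):.2f}")
```

The appeal of this kind of approach is that the score is derived from behaviour the platform already observes, rather than from a fresh trove of government IDs handed over and stored centrally.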
Meta is already using a reduced variation of this idea as part of its verification model, with users having to meet minimum activity requirements, such as a prior posting history, to be eligible. But this alone is not enough: the approach must be taken further to minimise the need to exchange large amounts of personal data. Doing so not only reduces risk but helps us create online spaces that are safe and trusted.
Meta Verified, as it currently stands, raises a series of challenges and concerns, and services like it are set to develop and grow over the next few years. We must remember that organisations given the capability to hold large amounts of personal data carry a huge responsibility to consumers. Do you trust Meta, or any other social media platform, to keep your identity safe? With the answer most likely being no, it appears we need a drastic rethink of how we approach digital security in the future.