Identity: The Social Media Dilemma
Recently there has been a great deal of news about both over- and under-reach by social media companies with regard to their target demographics, whether it be support of toxic communities that foment and engage in actions damaging to civil society, or the targeting of population groups that are vulnerable to abuse (such as the young, the very old, and key minorities). Additionally, a number of platforms struggle with coordinated inauthentic behavior, especially in the user space, where bots can wage damaging campaigns or facilitate the leadership of toxic groups. The unspoken challenge in combating these campaigns is that by completely shutting them down, many social media companies will suffer (or fear) a considerable loss of revenue, particularly if they appear to be censoring a certain political or social group.
The answer, as alluded to in the title, is identity management and verification. Now, this *will* cause a loss in revenue. That is unavoidable, though mitigable in the long run as the discourse across the various platforms shifts toward something more trustworthy. Furthermore, by addressing inauthentic actors via a staged, rigorous authentication process, a platform can strain out bad actors and address vulnerabilities in its process as it goes.
This concept is not novel; a number of financial sites already engage in a similar process. One critical piece, however, would be the protection and management of that identity, including the detection of deceased users to prevent abuse of their accounts.
Echoing the staged approach mentioned above, and envisioning a platform à la Facebook or LinkedIn (though this also applies to platforms where anonymity is a feature, e.g. Reddit or Imgur), initial users would join with the ability to create their own account, post on their page, post on group pages of which they are a member, and leave product reviews. The key difference is that, as unverified users, their posts would fall to the back of the prioritization heap for any sorting algorithm.
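A minimal sketch of how that deprioritization might work in a feed-sorting routine, assuming the platform ranks posts by some relevance score and uses the author's verification stage (0 for unverified) as the leading sort key; the names and fields here are illustrative only, not any platform's actual ranking system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author: str
    verification_stage: int  # 0 = unverified, 1-3 = verification stages described below
    relevance: float         # whatever relevance score the platform already computes

def rank_feed(posts: List[Post]) -> List[Post]:
    """Sort a feed so posts from verified authors surface first, then by relevance.
    Posts from unverified (stage 0) authors fall to the back of the heap."""
    return sorted(posts, key=lambda p: (p.verification_stage, p.relevance), reverse=True)
```

Under this scheme, a verified author's post outranks an unverified author's post regardless of relevance, which is the escalation described for the later stages.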
A stage 1 verification is conducted upon user request and should include some default privacy settings (protecting that user's identity unless they manually disable the settings). This verification involves an automated open-source check against databases in that country or region, along with a check against other users. An automatic flag is raised if the algorithm detects a duplicate user or a user that does not exist in the open-source check. For example, if Jane Smith with phone number X and address Y registers for an account that already exists with that number and address, a customer support ticket is generated and investigated. This could be innocuous: the new Jane Smith could be the daughter of the existing Jane Smith and share the same home phone number and address. Alternately, the new Jane Smith could be legitimate and the previous one a foreign bot that used open-source intelligence to generate a persona. These escalated tickets (and the assumption is that there could be quite a few!) are then investigated and actioned. Once a user is verified, two-factor authentication is required for their account and a unique token is generated for that user, which tracks the user's actions throughout the company's platform space and is viewable by both the company and the user. The user should also be able to decide whether they wish to sell their data through the token, which permits ethical monetization of their data according to parameters set by the company while also providing a venue for the user to benefit from the sale, incentivizing both parties. The biggest hurdle here is the investigation work required to eliminate bots and stolen identities, which will be a personnel-intensive activity.
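To make that flow concrete, here is a minimal sketch of what the automated portion of stage 1 screening might look like. The data stores, field names, and ticket wording are hypothetical placeholders for illustration, not any real platform's API.

```python
from dataclasses import dataclass
from typing import List, Optional, Set, Tuple

@dataclass
class Registration:
    name: str
    phone: str
    address: str

@dataclass
class Stage1Result:
    verified: bool
    ticket_reason: Optional[str] = None  # set when a support ticket is raised

# Hypothetical stand-ins for the platform's user database and the
# open-source records available for the user's country or region.
existing_users: List[Registration] = []
open_source_records: Set[Tuple[str, str, str]] = set()

def stage1_check(reg: Registration) -> Stage1Result:
    """Automated stage 1 screening: flag duplicates and identities that
    have no match in the open-source records."""
    # Duplicate check: another account already uses this phone/address pair.
    for existing in existing_users:
        if existing.phone == reg.phone and existing.address == reg.address:
            return Stage1Result(False, "duplicate phone/address - escalate to support")

    # Open-source check: no record of this identity at all.
    if (reg.name, reg.phone, reg.address) not in open_source_records:
        return Stage1Result(False, "no open-source match - escalate to support")

    # Passed the automated checks: enroll the user; requiring 2FA and issuing
    # the unique tracking token would follow here (elided in this sketch).
    existing_users.append(reg)
    return Stage1Result(True)
```

In the Jane Smith example above, the second registration with the same number and address would hit the first branch and generate a ticket for a human to resolve.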
Moving on to stage 2 verification, users undergo a more rigorous open-source screening as well as a chat-bot based authentication via the two-factor authentication application. The purpose of the chat-bot authentication is to serve as a limited Turing test, weeding out accounts that have managed to pass stage 1. It goes without saying that throughout this process, regular investigation of inauthentic accounts based on activity should continue, as cleaning up the online space would always remain a priority. Upon stage 2 verification, the user's posts are escalated in the prioritization algorithm and the user is permitted to manage groups and organizations within the platform. For some of the other platforms (e.g. Reddit), this could be the stage required to perform moderation duties for a subreddit. As a result of the stage 2 verification, an annotation is made on the user's token, and the behavior captured there is semi-regularly screened (unless this feature is turned off by the user), with an alert generated for abnormal behavior.
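As a rough sketch of how the stage 2 annotation and semi-regular screening might hang together, the snippet below records the verification stage on a per-user token and raises an alert when a day's activity departs sharply from the user's own baseline. The field names and the simple z-score heuristic are assumptions for illustration, not a prescribed detection method.

```python
from dataclasses import dataclass, field
from statistics import mean, pstdev
from typing import List

@dataclass
class UserToken:
    token_id: str
    verification_stage: int = 1
    screening_enabled: bool = True          # user may opt out of screening
    daily_post_counts: List[int] = field(default_factory=list)

def promote_to_stage2(token: UserToken) -> None:
    """Annotate the token once the user clears the stage 2 checks."""
    token.verification_stage = 2

def screen_behavior(token: UserToken, threshold: float = 3.0) -> bool:
    """Semi-regular screening: alert when today's activity deviates sharply
    from the user's own historical baseline (a simple z-score heuristic)."""
    if not token.screening_enabled or len(token.daily_post_counts) < 8:
        return False  # respect the opt-out, and wait for enough history
    *history, today = token.daily_post_counts
    baseline, spread = mean(history), pstdev(history) or 1.0
    return abs(today - baseline) / spread > threshold
```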
Lastly we move to stage 3 verification, whereby the user is given the highest level of prioritization and access on the platform, with the final verification conducted by a human. While this can provide the strongest verification, the process is still fallible (and labor intensive), and I would assume a backlog would build rather quickly. The flip side is that data sold from these accounts could command the highest premium for the user, and the company benefits as well, since the data is of much higher quality given its authentic nature. Celebrities, influencers, and public figures would be among the primary individuals engaging in this type of verification (à la Twitter's verified identities). This verification, however, should be made available to all users, as it leads to a different sort of platform.
The results and ramifications of these verification levels are multifold. First, by the third verification level, bots will have been screened out, especially as the three-level process will be too labor intensive and constructed in such a way as to make scripting it extremely difficult. This can be accomplished with defensive programming techniques such as changing the order of the forms required to sign up, changing the size and shape of the form boxes and backgrounds, and using synonyms to generate the form labels, compounding the requirements for a bot or algorithm trying to pass the forms (a sketch follows below). Also, as such identity becomes commonplace, I foresee that platforms will need to spend less time censoring and managing content (except in the case of violent hate speech) and can allow users to be regulated by accountability culture, as “naming and shaming” can have disastrous effects on someone’s livelihood. One might argue that this is a failing as opposed to a feature; I disagree. Personal accountability should extend into the online space. Actioning harassment and other forms of abuse becomes easier as well, as the tokenized nature of the identities allows easy tracking within a company’s platform (of course, this requires that the company’s policies be genuinely focused on targeting harassment and abuse, not simply paying lip service to that end).
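The defensive form techniques mentioned above could look roughly like the following: each render shuffles the field order, picks a synonym for each label, and varies the box geometry, so a script keyed to a fixed layout breaks. The synonym table and pixel ranges are invented for the example; a real platform would vary far more properties (markup structure, element names, backgrounds, and so on).

```python
import random
from typing import Dict, List

# Hypothetical label synonyms for each sign-up field.
FIELD_SYNONYMS: Dict[str, List[str]] = {
    "name": ["Full name", "Your name", "Name as registered"],
    "phone": ["Phone number", "Contact number", "Telephone"],
    "address": ["Home address", "Street address", "Mailing address"],
}

def generate_signup_form() -> List[dict]:
    """Build a sign-up form whose field order, labels, and box geometry
    differ on every render, so a bot cannot rely on a fixed layout."""
    fields = list(FIELD_SYNONYMS)
    random.shuffle(fields)  # randomize the order of the forms
    return [
        {
            "field": name,
            "label": random.choice(FIELD_SYNONYMS[name]),  # synonym label
            "width_px": random.randint(180, 320),           # varied box size
            "height_px": random.randint(28, 44),
        }
        for name in fields
    ]
```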
Identity verification and management is one of the primary ways we can restore the online realm to some semblance of decency, hold users accountable for their actions, and prevent massive disinformation campaigns from swaying the social fabric. While this concept has costs, both in terms of capital and social leverage, the end result of a better online ecosystem represents a long-term approach that I feel social media companies must begin to prioritize.