Most big-name social media companies generally work the same way:
- Provide a platform that:
  - Makes it easy to generate content
  - Allows users to connect to others
  - Provides a feed for each user to consume
- Harvest the content generated across the platform to create a feed specific to each user, and optimise that feed to maximise ad revenue.
- Repeat over and over.
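The loop above can be sketched in code. This is a hypothetical illustration, not any real platform's system: the `Post` class, the `predicted_ad_revenue` field, and the `build_feed` function are all assumptions standing in for far more complex machinery.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_ad_revenue: float  # assumed output of some revenue model

def build_feed(all_posts: list[Post], user: str, size: int = 3) -> list[Post]:
    """Rank the platform-wide content pool for one user,
    highest predicted ad revenue first."""
    # The feed draws from everyone's content, not just the user's own
    candidates = [p for p in all_posts if p.author != user]
    return sorted(candidates,
                  key=lambda p: p.predicted_ad_revenue,
                  reverse=True)[:size]

posts = [
    Post("alice", "holiday photos", 0.2),
    Post("bob", "sponsored gadget review", 0.9),
    Post("carol", "local news link", 0.5),
]
feed = build_feed(posts, user="dave")
```

Note that nothing in this ranking considers where the content came from or whether it is reliable; only the revenue signal matters, which is exactly the point of the observations below.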
There are a few things to notice about this process:
- The social media company can improve how users generate content: content generation can become easier, or more varied.
- The process lends itself to experimentation: the platform can trial a feature on a subset of users, then roll it out to everyone later.
- If a user's feed is optimised for maximum ad revenue, the platform can adjust either engagement time or the content mix, assuming some content generates more ad revenue than other content.
- The source of the content does not appear to matter. The content may or may not be: factual; reliable; from the user's network of connections.
Notice that the first two observations are rather benign. Experiments are fine. Easier and more varied content generation is fine too.
But the other two observations pose a real social challenge. In fact, once they are said aloud, it is easy to think of many examples of each being a well-known issue. We're all familiar with Fake News. And couldn't less commercial content sometimes be the more important content, considering that increased use of social media is correlated with poorer mental health?
Zeroing in on Content Sourcing
If a social media platform were to begin vetting sources, how might it go about it? One idea: limit profiles to actual people posting normal things (i.e. eliminate fake profiles and accounts that are part of influence campaigns). It could be implemented with a series of policies:
- Hibernate unused accounts (so they are effectively removed from the social graph).
- Hibernate accounts with bot-like activity (such as suspicious post rates/frequencies, geographical metadata, etc.).
- Hibernate accounts that are frequently flagged for objectionable content. (I recognise this one is trickier.)
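The policies above could be sketched as a set of threshold checks. Everything here is an illustrative assumption: the `Account` fields and every threshold value are invented for the sketch, and a real system would need far more nuanced signals (and would face the flagging problem noted above).

```python
from dataclasses import dataclass

@dataclass
class Account:
    days_since_last_login: int
    posts_per_day: float
    login_countries_last_week: int   # crude geographical-metadata signal
    objectionable_content_flags: int

def should_hibernate(a: Account) -> bool:
    """Return True if any hibernation policy applies (thresholds are
    illustrative assumptions, not recommendations)."""
    if a.days_since_last_login > 365:       # unused account
        return True
    if a.posts_per_day > 100:               # bot-like post rate
        return True
    if a.login_countries_last_week > 5:     # suspicious geo metadata
        return True
    if a.objectionable_content_flags > 10:  # frequently flagged
        return True
    return False
```

Even this toy version makes the trade-off visible: every rule that hibernates an account also removes that account's content (and ad impressions) from the platform.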
As I see it, these kinds of policies risk negatively impacting ad revenue or stock value.
I’m curious: is there public data that shows the impact of such experiments? In other words, has anyone tested what happens to ad revenue when suspicious activity is limited?
What If the Company’s Value Wasn’t Based on Advertising or ‘Number of Active Users’?
Now imagine you run a social media company and you’re faced with all of the issues associated with modern social media companies. What do you do about it?
You might wonder if it were possible to reshape the organisation to make money from something like subscriptions, financial transactions or e-commerce. If so, you reason, the risks associated with the legacy (ad-based) business would be irrelevant.
Are there any social media companies actively exploring other business models? The only one I know of is the big-blue-one-that-shall-not-be-named.
I wonder if they’ve done that research on user-generated content.