4 Comments

Dave, you say it's social media history 101 that algorithms are gamed and therefore can't be open. What examples are there of social media companies open sourcing their algorithms? The supposed security-through-obscurity advantage of closed source software has proven to be a myth, and open source software can be just as secure. What evidence is there that private algorithms provide any actual advantage, if they're being gamed anyway?

There's nothing that says open source has to be set in stone. Quite the contrary, in fact.


To clarify, I'm not suggesting that open source is always gamed. I'm suggesting that if there is significant money in strategically gaming the algorithms, then there is *always* a large market for finding and exploiting the weaknesses in those algorithms.

That's true whether it's open source or closed source software. But the large platforms have to have some capacity to push algorithmic patches, otherwise the whole thing falls to shit. (First example that comes to mind is Google pushing a patch that ended Demand Media and the other content farms. Plenty of others though, like the algorithmic adjustments after 2016 that demonetized the Macedonian fake news sites.)
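To make "algorithmic patch" concrete, here's a minimal toy sketch of the idea; the signals, weights, and patch are all hypothetical, not anything Google or any other platform actually runs:

```python
# Toy illustration of an "algorithmic patch": the platform adjusts ranking
# weights server-side, so the change takes effect immediately and exploiters
# can't opt out. All signals and weights here are hypothetical.

WEIGHTS = {
    "relevance": 1.0,
    "freshness": 0.3,
    "engagement": 0.5,
    "content_farm_score": 0.0,  # signal exists but is ignored pre-patch
}

def rank_score(doc: dict) -> float:
    """Weighted sum of a document's ranking signals."""
    return sum(WEIGHTS[signal] * doc.get(signal, 0.0) for signal in WEIGHTS)

def apply_patch() -> None:
    """The 'patch': start penalizing documents that look like farm output."""
    WEIGHTS["content_farm_score"] = -2.0

docs = [
    {"relevance": 0.9, "freshness": 0.2, "engagement": 0.4, "content_farm_score": 0.0},
    {"relevance": 0.6, "freshness": 0.9, "engagement": 0.8, "content_farm_score": 0.9},
]

print([round(rank_score(d), 2) for d in docs])  # [1.16, 1.27]: farm content wins
apply_patch()
print([round(rank_score(d), 2) for d in docs])  # [1.16, -0.53]: farm content buried
```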

I wrote on this in a 2012 piece, "Social Science Research Methods in Internet Time." It's in a half-dozen other books from then and before as well, IIRC.

In my 2016 book, I argue for a general principle of approximate transparency. Basic point is that code ought to be auditable by trusted third parties, but complete transparency ruins the system when there's value attached to the ranking mechanism.


Is the system already working as desired? I think most people, including yourself, would say no. While I won't claim that complete transparency would necessarily fix the problems, I do question how you can say it would completely ruin the system.

There are examples of fully transparent tools which haven't ruined the systems they serve: SpamAssassin for email spam, ClamAV for computer viruses, public blocklists on GitHub to block data brokers and ad companies. These are all far from perfect, but so is any proprietary or even partially transparent algorithm.
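For a sense of how those tools work: a SpamAssassin-style filter scores each message against a public rule set and flags anything over a threshold. Here's a minimal sketch in that spirit (the rules, weights, and threshold below are made up for illustration, not SpamAssassin's actual rule set):

```python
import re

# Toy SpamAssassin-style filter: every rule, score, and threshold is public.
# Rules and weights are illustrative, not SpamAssassin's real rule set.
RULES = [
    (re.compile(r"viagra|lottery", re.I), 2.5, "SPAMMY_KEYWORDS"),
    (re.compile(r"click here now", re.I), 1.5, "URGENT_CTA"),
    (re.compile(r"^[A-Z !]{20,}$", re.M), 1.0, "ALL_CAPS_LINE"),
]
THRESHOLD = 3.0  # messages scoring at or above this are flagged as spam

def score_message(body: str):
    """Return the total score and the names of the rules that fired."""
    hits = [(points, name) for pattern, points, name in RULES if pattern.search(body)]
    return sum(points for points, _ in hits), [name for _, name in hits]

total, fired = score_message("CLICK HERE NOW TO CLAIM!!\nYou won the lottery.")
print(total, fired)  # 5.0 ['SPAMMY_KEYWORDS', 'URGENT_CTA', 'ALL_CAPS_LINE']
print("spam" if total >= THRESHOLD else "ham")
```

The whole rule set ships in the open, spammers can and do read it, and the maintainers respond by publishing updated rules, which is essentially the patch capacity you describe above.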

I agree that any system of value will be gamed for money or power; that's pretty clear. Where our solutions differ is that I think we should focus on reducing the value that can be gained. Adding partial, or even full, transparency seems like a weak feedback loop to me. Removing the money and power incentives for the social media companies and the ad buyers would be a much stronger one. One way to do that would be to redecentralize the internet away from the handful of large companies that monopolize it: break them up, mandate federation and interoperability, regulate their use of our private data, and give individuals and groups back control.

Pure transparency may improve or ruin the current system; we won't know until we actually try it. Though in my opinion the current system is the core of the problem, not a foundation to build upon.

Comment removed (Apr 14, 2022)

“Incitement to violence, hate speech (adult temper tantrums) and bullying / slandering / doxing / other evils should be vigorously discovered and removed.”

What you’re suggesting here is to restrict speech on Twitter *even more* than it already is. You say “Free speech must allow for disinformation / misinformation…” and then immediately say that slander should be removed. This is what I don’t get about the “omg, Twitter is so censorious!!!” people: you very clearly *do* want moderation, but you want to personally be exempt from it.

Dave - this was a great piece. Well done.
