There is more content available to consume than ever before, and long ago we reached the point where there is more content available than we can ever consume in a lifetime. There is so much content available that we are not even able to assess what we should consider consuming. Our brains are not equipped to handle it. Therefore, our solution has been to use machines as an intermediary between the creator and the consumer. A machine will decide what should be consumed under certain circumstances. In effect, this puts the machines in the role of gatekeeper.
This can be problematic because today’s machines serve this function rather crudely. They run algorithms that humans have written and continue to tweak. Content goes into the algorithm and then a recommendation on what is important or relevant is returned as output. The Facebook algorithm is particularly famous for this. There is more content published on Facebook than you can consume, and you are probably not interested in most of it anyway. Therefore, a machine will curate it for you and display what it wants you to see. Sometimes its goals may not align with yours. For instance, the goal of a social media algorithm may be to show you content that keeps you happily engaged with the platform as long as possible. That may or may not be what you want at the highest level.
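The shape of such an algorithm can be sketched in a few lines. Everything here is invented for illustration: the feature names, the weights, and the scoring formula are hypothetical, and real platform ranking systems are proprietary and vastly more complex. But the basic loop is the same: score every post for predicted engagement, sort, and show only the top few.

```python
# Toy feed-curation sketch. The features and weights are hypothetical;
# the point is the shape: score each post, sort, surface only the top.

def engagement_score(post):
    """Crudely predict how well a post keeps users on the platform."""
    return (
        2.0 * post["likes_per_hour"]
        + 3.0 * post["comments_per_hour"]
        + 1.5 * post["shares_per_hour"]
    )

def curate_feed(posts, limit=3):
    """Return only the top-scoring posts; the rest are never seen."""
    ranked = sorted(posts, key=engagement_score, reverse=True)
    return ranked[:limit]

posts = [
    {"title": "Vacation photos", "likes_per_hour": 40,
     "comments_per_hour": 2, "shares_per_hour": 1},
    {"title": "Local news story", "likes_per_hour": 10,
     "comments_per_hour": 8, "shares_per_hour": 5},
    {"title": "Thoughtful essay", "likes_per_hour": 5,
     "comments_per_hour": 3, "shares_per_hour": 0},
]

for post in curate_feed(posts, limit=2):
    print(post["title"])
```

Notice that the essay never surfaces at all: whatever its merit to a particular reader, the machine's engagement proxy ranks it last, and a two-item feed simply discards it.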
Google is another very important content curating machine. Its algorithm does its best to not only find potentially relevant content for your searches but also decides which of those results are “best” and ranks them accordingly. There is no possible way we could sort through everything returned in a search. For most practical purposes, the fourth item returned on the thirteenth page of results might as well not have been returned at all. Most humans will focus on the first three or four items returned. We are at the mercy of what the machine thinks we should see. Generally speaking, we may be happy with what is returned, but at least a part of that is because we don’t know what wasn’t returned. Of course, without the machine intermediaries, using the internet would be nearly impossible. We would never be able to find what we need.
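A crude keyword-matching ranker, of the kind the next paragraph contrasts with a true understanding of content, can also be sketched. This is not Google's algorithm, which weighs hundreds of signals; it is a deliberately naive term-counting score that shows why the cutoff matters: anything ranked below the first page effectively does not exist for the searcher.

```python
# Toy search ranker using naive keyword matching -- nothing like a real
# search engine, but it shows the mechanism: rank everything, then cut.

def relevance(query, document):
    """Count how many times each query term appears in the document."""
    words = document.lower().split()
    return sum(words.count(term) for term in query.lower().split())

def search(query, documents, results_per_page=10):
    """Rank all documents, but return only the visible first page."""
    ranked = sorted(documents, key=lambda d: relevance(query, d),
                    reverse=True)
    return ranked[:results_per_page]

docs = [
    "guide to baking sourdough bread at home",
    "bread prices rise as wheat harvest falls",
    "sourdough starter troubleshooting for home bakers",
    "stock market update",
]

for doc in search("sourdough bread", docs, results_per_page=2):
    print(doc)
```

The troubleshooting guide, matching only one of the two query terms, falls below the two-result cutoff; to the searcher it might as well not exist, which is exactly the position of that fourth item on the thirteenth page.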
This problem could potentially lessen at some undefined future point if/when there are full-fledged artificial intelligences that replace algorithms for this service. Those machines would theoretically have access to all available information, understand that information (and not just do keyword matching, for instance), and combine that with a very personalized understanding of what the person likes, wants, or needs in a given circumstance. Creators with very far-ranging plans may take comfort in this, but, for now, we have to work with what we have. That means working with imperfect algorithms. That means designing content that will satisfy machines in order to get what we created passed on to other humans. If the machine does not “like” what you have created, then it will not pass it along to other humans. That makes it dramatically less likely other humans will find it and enjoy it.
I’ve noticed people commenting that blog posts on certain websites seem like they were written for machines. They probably were. They were probably engineered to be what machines like, and deliberately created to those specifications. The entire field of “search engine optimization” (or SEO) was born out of the need to get machines to like the content that humans create. It is all about engineering content to be “liked” by computers, whether that is Google, Amazon.com, YouTube, Facebook, or another platform where you must convince a machine that your content is worthwhile before it will share it with other humans. If the machine likes what you have made, it will rank higher and be seen by more humans. That means your content must first satisfy the machine’s requirements for what is good content and what is not. That is just the way the game is played.
But this is a game with changing rules. The machines sometimes change what they like. This happens when algorithms are updated. Content the older algorithm may have liked falls out of favor and new content replaces it. Suddenly the machine is punishing behavior it used to reward and rewarding behavior it used to ignore, sending some content creators into a tailspin. The humans have to relearn the rules and adjust to the new gatekeeper.
The idea of a machine as gatekeeper is an interesting one because we so commonly say that technology has allowed us to “go around” the gatekeepers in so many industries. It no longer matters whether a certain human at a particular publishing house likes your work, because you can publish yourself. But now the gatekeeper is the machine that decides whether what you have created is worth putting in front of other humans. Yes, there are still some ways we can use technology to share content directly with other humans without a machine gatekeeper, but as our lives fill with more and more content, we are increasingly reliant on machines to separate what we regard as valuable content from the noise.
So, where does that leave us as creative individuals? It means we need to give at least some thought to how our content will be received by machines. “Tripping” an algorithm can be the difference between the success and failure of a project: suddenly a machine makes what you have created particularly visible in some ranking or another. The machine likes it, and now other humans will see it. Will machines like what we are creating? If they don’t, our creations may have increasingly little hope of being enjoyed by humans.