Are We Dominated By Algorithms?
We are living in a world powered by AI and algorithms. From insurance premiums to police resource allocations to rocket launches, algorithms are working behind the scenes to determine the best outcomes. But what exactly are algorithms, machine learning, and artificial intelligence?
Algorithms are automated instructions for solving specific, discrete problems. They are the basic computational building blocks. Both machine learning and artificial intelligence involve collections of algorithms, but they have different mechanisms. Machine learning uses structured historical data to predict future outcomes, while artificial intelligence works with unstructured data and tries to solve problems the way humans do. More often than not, people use the term "algorithms" as an oversimplified umbrella term for all of the above.
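The distinction can be made concrete with a toy sketch (entirely hypothetical data and rules): a plain algorithm encodes its rule by hand, while a machine learning approach infers the rule from structured past data.

```python
# A toy contrast between a hand-written algorithm and machine learning.
# All data and thresholds here are illustrative, not from any real system.

# 1. A plain algorithm: the spam rule is written by a human.
def is_spam_rule(num_links):
    return num_links > 3  # the "3" is chosen by hand

# 2. Machine learning: the threshold is learned from past labeled examples.
past_emails = [(0, False), (1, False), (2, False), (5, True), (8, True)]

def learn_threshold(examples):
    # Pick the midpoint between the largest non-spam link count
    # and the smallest spam link count seen in the training data.
    ham = max(n for n, spam in examples if not spam)
    spam = min(n for n, spam in examples if spam)
    return (ham + spam) / 2

threshold = learn_threshold(past_emails)
print(threshold)       # inferred from data rather than hard-coded
print(8 > threshold)   # the learned rule classifies a new email
```

The difference is where the rule comes from: a human in the first case, the data itself in the second.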
How algorithms could benefit everyone
Algorithms can be found in everyday tools such as laptops, calculators, and dishwashers. Our society has adopted machines and algorithms because of the productivity gains they bring, and in return they have freed us from many kinds of repetitive work. But the automation of labor isn't the only function of algorithms. Their true power is the automation of intelligence, where algorithms in advanced software systems make decisions on our behalf. For the average person, there are three main areas where algorithms shine:
1. High-quality search results
If the Internet were a giant library, then Google would be the world's smartest librarian. No matter what the user is looking for, and however vague the query might be, Google sifts through countless parameters and finds the closest matching webpages. This is a simple yet profound use of algorithms, one that has made the Internet more accessible, efficient, and powerful than ever before.
2. High-accuracy predictions
By analyzing huge datasets with advanced algorithms such as neural networks, computers can make extremely accurate predictions. A great example is Amazon's anticipatory shipping system, which looks at customers' shopping histories and behaviors to ship products before an order is even placed. As a result, customers can expect their items to arrive sooner than ever.
3. High-relevancy recommendations
For some, Netflix's movie recommendations come to mind first, but recommender systems are used across a variety of digital services: from which restaurants we might want to try on Yelp, to which outfits we might also like on Zara, to which accounts we should follow on TikTok, algorithms make our lives easier by making tailored suggestions.
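One common family of recommender techniques is collaborative filtering. A minimal item-based sketch, with entirely made-up users and restaurants, looks like this: suggest the item whose interaction pattern is most similar to what the user already liked.

```python
# A minimal item-based collaborative filtering sketch (hypothetical data):
# score each unseen item by its cosine similarity to the user's liked items.
import math

# Rows = users, columns = items (1 = interacted, 0 = not). Toy data.
interactions = {
    "alice": {"pizza_place": 1, "sushi_bar": 1, "taco_truck": 0},
    "bob":   {"pizza_place": 1, "sushi_bar": 0, "taco_truck": 1},
    "carol": {"pizza_place": 0, "sushi_bar": 1, "taco_truck": 1},
}
items = ["pizza_place", "sushi_bar", "taco_truck"]

def item_vector(item):
    # The item's column: which users interacted with it.
    return [interactions[u][item] for u in interactions]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user):
    liked = [i for i in items if interactions[user][i]]
    scores = {}
    for candidate in items:
        if interactions[user][candidate]:
            continue  # skip items the user already knows
        scores[candidate] = sum(
            cosine(item_vector(candidate), item_vector(i)) for i in liked
        )
    return max(scores, key=scores.get) if scores else None

print(recommend("alice"))  # the unseen restaurant most similar to her tastes
```

Production systems at Netflix or TikTok are vastly more complex, but the core idea of scoring unseen items against a user's history is the same.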
How algorithms could harm consumers
The commercial applications of algorithms are far from perfect, but they are improving at a very fast rate thanks to financial incentives and fierce competition. The real issue with these algorithms is not accuracy but the lack of transparency, or what some people refer to as the "black box problem". Major tech companies do not disclose their algorithms, claiming them as intellectual property. Given the sophistication of their algorithms, it's not unreasonable that they would want to protect their proprietary tech. But at the same time, this claim also shields them from scrutiny.
After the Cambridge Analytica scandal broke in 2018, the general public finally woke up to the true power of social media algorithms. It was hard to believe that our beloved social media platforms had greatly exacerbated problems like political disinformation, extremism, and mental health disorders, all of which were well documented in the Netflix documentary films The Great Hack and The Social Dilemma.
But from a purely algorithmic standpoint, the content recommendation systems in social media networks have a serious flaw: they are built on the metric of engagement. The more engagement a piece of content gets, the more likely it is to be recommended to other users. That sounds fair enough, but is there always wisdom to be found in crowds?
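The mechanism is easy to sketch. In this toy example (all posts, numbers, and weights are hypothetical), content is ordered purely by an engagement score, so a sensational post outranks a more substantive one regardless of quality:

```python
# A minimal sketch of engagement-based ranking, the metric described above.
# Posts and weights are illustrative, not from any real platform.
posts = [
    {"title": "In-depth policy analysis", "likes": 120, "shares": 15,  "comments": 30},
    {"title": "Outrage-bait rumor",       "likes": 900, "shares": 400, "comments": 700},
    {"title": "Local news update",        "likes": 300, "shares": 40,  "comments": 90},
]

def engagement(post):
    # Shares and comments are weighted more heavily than likes, since they
    # signal stronger engagement. The weights here are made up.
    return post["likes"] + 3 * post["shares"] + 2 * post["comments"]

# The feed is simply the posts sorted by engagement, highest first.
feed = sorted(posts, key=engagement, reverse=True)
for post in feed:
    print(post["title"], engagement(post))
```

Nothing in the scoring function knows or cares whether a post is true, which is exactly the flaw the text describes.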
Not always. Research has shown that social media traffic tends to exhibit two biases: homogeneity bias, the tendency to consume content from a narrow set of sources, and popularity bias, the selective exposure to content from top sources. Further study of popularity bias showed that as a low-quality piece of content gains popularity, people become more likely to like and share it and less likely to fact-check it. What is concerning is the real danger these studies point to: our limited exposure to diverse points of view and our vulnerability to manipulation by disinformation.
How algorithms could damage society
The real-world application of algorithms doesn't stop at the consumer level; algorithmic tools are increasingly being used at a societal level:
Last year, the nonprofit newsroom The Markup reported that over 500 universities in the U.S. use predictive models to evaluate their students by assigning each an academic "risk" score. Some of those universities use race as a "high impact predictor", and Black students are consistently rated at higher risk of not graduating than their white peers. This raises concerns that Black students are being pushed out of math and science majors.
According to a Harvard Business School study, almost all Fortune 500 companies use resume-filtering software in recruiting. During the application process, millions of qualified job applicants are rejected for failing to meet certain initial criteria, such as a one-year gap in employment or long-term parental leave. The software also fails to assign value to other kinds of life experience that might speak to an applicant's qualifications.
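How a qualified candidate gets screened out is easy to illustrate. The following sketch is hypothetical (no real screening product works exactly this way): a hard filter on employment gaps rejects an applicant before their skills are ever considered.

```python
# A hypothetical sketch of rigid resume screening: a hard employment-gap
# filter rejects candidates before a human ever sees the application.
from datetime import date

def employment_gap_months(history):
    # history: list of (start, end) date pairs, sorted by start date.
    gaps = []
    for (_, end1), (start2, _) in zip(history, history[1:]):
        gaps.append((start2.year - end1.year) * 12 + (start2.month - end1.month))
    return max(gaps, default=0)

def passes_screen(resume, required_keywords, max_gap_months=12):
    if employment_gap_months(resume["history"]) > max_gap_months:
        return False  # e.g. long-term parental leave triggers rejection
    return all(k in resume["skills"] for k in required_keywords)

candidate = {
    "skills": {"python", "sql", "leadership"},
    "history": [
        (date(2015, 1, 1), date(2018, 6, 1)),
        (date(2020, 1, 1), date(2023, 1, 1)),  # 19-month gap for caregiving
    ],
}
print(passes_screen(candidate, {"python", "sql"}))  # rejected despite matching skills
```

The candidate matches every required skill, but the gap rule fires first, which mirrors the failure mode the study describes.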
Beyond education and hiring, algorithms are also widely used in government agencies as they modernize and automate their processes. According to a 2020 report from Stanford and New York University, more than 40% of U.S. federal agencies have experimented with AI; however, only 15% currently use highly sophisticated AI. This is concerning because less sophisticated AI will likely produce lower accuracy, which may disproportionately affect disadvantaged and vulnerable groups.
The causes of algorithmic discrimination
Advanced AI systems are trained on very large datasets, from which we might expect objectivity and neutrality. So why do these AI systems sometimes make "racist" or "sexist" judgments and predictions? A 2019 report by the Brookings Institution, a nonprofit research organization, pointed to two primary causes: historical human biases and incomplete or unrepresentative data.
The U.S. has a history of discrimination toward racial and ethnic minorities, and many of these minority groups have a lower economic standing. When algorithms are trained on historical data, the same racial, gender, and class disparities can be perpetuated or even amplified. Over time, these minority groups can find themselves trapped in a negative feedback loop.
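The perpetuation mechanism can be shown with a deliberately naive toy model on synthetic data: a predictor fit to historically biased approval decisions simply turns the old disparity into the new policy.

```python
# A toy illustration of how historical bias propagates. The "model" predicts
# loan approval from group approval rates in past decisions. All data is
# synthetic and the groups are abstract labels.
from collections import defaultdict

# Historical decisions: group A was approved far more often than group B.
# Each tuple is (group, income_level, approved).
training = [
    ("A", "high", 1), ("A", "high", 1), ("A", "low", 1), ("A", "low", 0),
    ("B", "high", 1), ("B", "high", 0), ("B", "low", 0), ("B", "low", 0),
]

outcomes = defaultdict(list)
for group, income, approved in training:
    outcomes[group].append(approved)

def predict(group):
    # Approve if the group's historical approval rate exceeds 50%.
    past = outcomes[group]
    return sum(past) / len(past) > 0.5

print(predict("A"), predict("B"))  # the old disparity becomes the new policy
```

Real models use far richer features, but when those features correlate with group membership, the same feedback loop can emerge in subtler form.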
On the other hand, insufficient data also makes it hard for algorithms to produce fair results. Imagine low-wage workers who don't have a smartphone, a bank account, or social media profiles; they are automatically excluded from the databases behind major AI software, such as credit scoring and applicant tracking systems. Here, the inequality comes from the AI tools' lack of precision.
It is also not straightforward to collect more data and make algorithms fairer. First of all, there is a tension between data collection and privacy protection. On the one hand, the more personal data we give away, the more personalized the digital services we receive. On the other, the same data can be used for targeted advertising, news, and political campaigning, not to mention the increased risk of privacy invasion and identity theft.
Even if data weren't a problem, there would still be a hard choice to make between fairness and accuracy. Current algorithms are great at optimizing for performance without considering diversity and inclusion. If we hard-code safeguards to guarantee fairness, overall performance will likely suffer. It all comes down to what kind of society we want in the future.
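The tradeoff can be made concrete with a tiny synthetic example. Here, one fairness criterion, demographic parity, is approximated by forcing equal selection rates across two groups; on data whose labels encode a historical disparity, the constrained rule scores lower accuracy than the unconstrained one. Everything below is illustrative.

```python
# A minimal sketch of the fairness/accuracy tension on synthetic data.
# Each record is (group, model_score, true_label).
data = [
    ("A", 0.9, 1), ("A", 0.8, 1), ("A", 0.6, 1), ("A", 0.4, 0),
    ("B", 0.7, 1), ("B", 0.5, 0), ("B", 0.3, 0), ("B", 0.2, 0),
]

def accuracy(preds):
    return sum(p == y for p, (_, _, y) in zip(preds, data)) / len(data)

# Unconstrained rule: approve everyone with score >= 0.6.
unconstrained = [1 if score >= 0.6 else 0 for _, score, _ in data]

# Parity-constrained rule: approve the top half of EACH group,
# equalizing the selection rate between groups.
def top_half_scores(group):
    members = sorted((s for g, s, _ in data if g == group), reverse=True)
    return set(members[: len(members) // 2])

selected = {g: top_half_scores(g) for g in ("A", "B")}
constrained = [1 if score in selected[group] else 0 for group, score, _ in data]

print(accuracy(unconstrained), accuracy(constrained))
```

On this toy data the unconstrained rule is perfectly accurate while the parity-constrained rule misclassifies two records, which is the cost-of-fairness tradeoff the text describes. Whether that cost is worth paying is precisely the societal question, not a technical one.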
The road ahead
Slowly but surely, we are reaching a consensus that algorithms are becoming an important part of our lives, on both an individual and a societal level. Most people would agree that we need responsible algorithmic tools, which require transparency, fairness, and privacy. But it will be a hard battle, because the ultimate fight for responsible algorithms is a fight over fair shares of power and control among individuals, tech giants, and nation-states. Still, there are steps we can take to bring that future closer.
First of all, we need consumer algorithmic literacy. Understanding how algorithms work, along with their strengths and shortcomings, will be essential in an algorithm-driven future. Secondly, we need to advocate for algorithmic transparency: more watchdogs, such as independent research firms and investigative journalists, to hold tech companies accountable. We also need to build a human element into the algorithm design and monitoring process if we want these tools to truly work for us and reflect our values. And finally, we need lawmakers to bring regulations up to speed to protect users and stop bad actors.