
The All-Encompassing Algorithm

digital citizenship · digital safety · digital well-being · digital4good · Dec 15, 2023
Navigating the Complex World of Social Media Algorithms

By: Elliot Nelson, Seasonal Public Relations Intern | Digital4Good

 

How many times have you scrolled through your feed on your favorite social media site, only to see a video of something you were talking to a friend about earlier? Or perhaps you were shopping for a new laptop online, then saw an ad for the exact same model on another social media site.

 

The world of social media is becoming more and more tailored to each individual, and it accomplishes this personalization through algorithms. An algorithm, as the Oxford dictionary defines it, is a process or set of rules to be followed in calculations or other problem-solving operations. On social media, that means a set of rules determining what appears in your personalized feed.

 

The most familiar algorithms are the ones behind social media feeds. Sites like TikTok and Instagram are constantly calculating the best way to market something to you. The emphasis is almost never on what the product is, but on how it can best be pushed to you. It’s the reason why seemingly innocuous Google searches are mirrored by your “For You” page: companies are always taking note of your digital footprint to better connect with you on their sites.

 

The Dangers

 

While algorithms might seem like a useful innovation, a growing body of evidence points to serious downsides.

 

The most common dangers include:

  • Online Echo Chambers
  • Excessive Data Collection
  • Algorithmic Biases
  • Idea Radicalization

 

Let’s examine each one and how to counteract the harmful effects.

 

Online Echo Chambers

Some of the most harmful phenomena on the Internet are echo chambers fueled by confirmation bias. GCFGlobal defines an echo chamber as an environment where one’s consumption of media and information only reinforces their existing point of view. Echo chambers are most commonly associated with politics. I’m sure many of us can recall at least one user whose political postings were backed by highly biased sources.

 

Algorithms can identify the internal or external biases we have and provide us with sources, trustworthy or not, that fill that need for confirmation. In fact, even the most conscientious individuals can fall prey to confirmation bias.
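To see how this feedback loop forms, here is a toy sketch (all item names and data are hypothetical, not any real platform’s system) of an engagement-based recommender that ranks content by how often its topic already appears in your history:

```python
from collections import Counter

def recommend(history, catalog, n=5):
    """Toy recommender: rank catalog items by how often their topic
    already appears in the user's watch history."""
    topic_counts = Counter(item["topic"] for item in history)
    # Items whose topic the user has engaged with most come first.
    ranked = sorted(catalog, key=lambda item: -topic_counts[item["topic"]])
    return ranked[:n]

# A hypothetical catalog with three topics, ten items each.
catalog = (
    [{"id": i, "topic": "politics_A"} for i in range(10)]
    + [{"id": i + 10, "topic": "politics_B"} for i in range(10)]
    + [{"id": i + 20, "topic": "sports"} for i in range(10)]
)

# A user who clicked just two politics_A posts...
history = [{"id": 0, "topic": "politics_A"}, {"id": 1, "topic": "politics_A"}]

feed = recommend(history, catalog)
# ...now gets a feed dominated by politics_A, which invites more clicks
# on politics_A, which skews the next feed further: an echo chamber.
print([item["topic"] for item in feed])
```

Real recommendation systems are far more sophisticated, but the self-reinforcing loop works the same way: what you engage with shapes what you see, which shapes what you engage with.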

 

While the echo chamber may seem inescapable, there are ways to combat it.

 

Solutions:

  • Identify your confirmation biases and how they affect your media usage.
  • Interact with different news sources for a diversified outlook.
  • Use common sense to evaluate sources and information.
  • Build digital media literacy through resources like Digital4Good.

 

If you can acquire these valuable skills, you’ll be well on your way to escaping the echo chamber.

 

Excessive Data Collection

Algorithms rely on user data and interactions to create targeted content. A common misconception concerns the extent to which personal information is shared with outside organizations. A recent experiment by Skynova found that a shocking 64% of business owners can access your information from social media sites.

 

As if that wasn’t bad enough, the information wasn’t just names and emails; it also included details like user location, browsing history, and personal financial information. While most businesses will not misuse this information themselves, mass data collection can quickly become a problem in the event of a data leak or breach.

 

Hackers have developed new tools that make it dramatically easier to break into websites and company systems and extract information. The University of Delaware reports that 694 data breaches have already occurred in 2023 to date. While that number might not seem alarming, those breaches account for nearly 612.4 million records of data.

 

Here are some ways you can fight this overgrowth of data collection.

 

Solutions:

  • Use privacy tools that block cookies and data collection, such as DuckDuckGo or Privacy Badger.
  • Limit sharing of personal information.
  • Use VPNs or similar tools to protect your data online.
  • Create a social media plan that limits how much data you share.

 

Algorithmic Biases

While algorithms may be efficient, it’s important to acknowledge that they are not foolproof. Algorithms, according to AIMultiple, are built from training sets that develop a certain line of logic, which is then applied to each person’s individualized feed. The problem is that these training sets are created by humans and, as such, can carry human biases.
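A tiny, deliberately simplified sketch (the groups, labels, and numbers are invented for illustration) shows how a skewed training set alone can produce unequal outcomes, even with no malicious intent in the code:

```python
from collections import Counter

def train(examples):
    """Toy 'model': always predict the most common label in training data."""
    majority = Counter(label for _, label in examples).most_common(1)[0][0]
    return lambda features: majority

# Hypothetical skewed training set: 90 samples from group A (labeled
# "approve") and only 10 from group B (labeled "deny").
training = [("A", "approve")] * 90 + [("B", "deny")] * 10
model = train(training)

# The model now predicts "approve" for everyone: perfectly accurate for
# the over-represented group A, and always wrong for group B.
accuracy_A = sum(model("A") == "approve" for _ in range(90)) / 90
accuracy_B = sum(model("B") == "deny" for _ in range(10)) / 10
print(accuracy_A, accuracy_B)  # 1.0 0.0
```

The errors fall entirely on the under-represented group, not because the logic targets them, but because of who was (and wasn’t) in the data. Real algorithmic bias works on the same principle at much larger scale.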

 

A major example of this is racial underrepresentation, particularly of minorities and communities of color. A Stanford-published article on AI racial biases in healthcare illustrates how the research subjects used to build healthcare AIs tended to come from a narrow range of racial and ethnic backgrounds, leading to a disparity in how well the AI understood and served other racial groups.

 

Here are some ways to fight these harmful biases.

 

Solutions:

  • Push for increased data quality in algorithms through legislation or company policies.
  • Be a catalyst for algorithm fairness audits.
  • Advocate for user control and responsiveness.
  • Call for public accountability.
  • Enlist the help of organizations like Digital4Good to contribute to the conversation.

 

Idea Radicalization

Idea radicalization poses the biggest risk to media users. Fed by algorithms, users often develop extreme views on their topics of interest.

 

These extreme views become dangerous because they are often misinformed. According to Police Chief Magazine, radicalization often results in violence against oneself or others. In addition, the radicalization of ideas through social media algorithms has been shown to build social imaginaries (shared beliefs about how things should be) that are incorrect or dangerous to racial, ethnic, and social groups.

 

So, how can we steer clear of these risks and ensure our (and other people’s) digital well-being and safety?

 

Solutions:

  • Fact-check your media with other sources.
  • Report conflicts of interest, misinformation, and disinformation on news sites.
  • Challenge your own internalized ideas and beliefs.
  • Read articles that reference (or link to) multiple sources, like Digital4Good’s blogs.

 

Looking to the Future

Avoiding the dangers of algorithms doesn’t mean avoiding social media entirely. It just means being cautious about the ways that all-aware algorithms use your personal information and data to build your personalized feed. Challenge yourself to investigate the reasoning behind your personal biases and develop a more diversified, nuanced view of the world.

 

Algorithms themselves are not inherently bad. They have the potential to revolutionize the world we live in, but they need to be held to standards of accuracy, accessibility, and safety. That future is only possible if users of algorithms like you advocate for desperately needed changes.

 

You are not alone in this fight. Organizations like Digital4Good are proud to stand behind you, empowering you with the necessary tools and information to succeed. 

 



Are you an expert in AI and generative tools? Consider joining our Digital4Good Summit! Learn more at digital4good.net.

 

 

 
