While many of us are still fascinated by AI’s capabilities, the technology let us down in plenty of ways this year. So, what went wrong? Artificial intelligence has enormous capacity for good, but it also carries inherent dangers that we as a society are still grappling with. AI has spread misinformation, eroded our trust, and disrupted entire industries. Here, we’ll look at six ways AI utterly failed us in 2024.
1. Spotify Wrapped Disappointment
Waiting for your Spotify Wrapped results has become a yearly tradition. The detailed insights are fun to share with friends, offering a review of your year in music listening. This year, however, was a huge letdown. The company leaned heavily on AI to generate the content, and it fell flat. Spotify didn’t give listeners their top genres or top albums. Instead, they received a “Music Evolution” list made up of oddly named micro-genres. Mine included Mulled Cider Folk Phase, Pink Pilates Princess Strut Pop Season, and Breakup Slow Dance Country Moment. The results felt strange and inaccurate, and some users even wondered whether AI had hallucinated them.
2. Taylor Swift AI-Generated Endorsement
During the election, Donald Trump shared AI-generated images depicting a Taylor Swift endorsement that never happened. Swift responded a month later by endorsing Kamala Harris, saying, “It really conjured up my fears around AI and the dangers of spreading misinformation.” This was hardly the first time AI had been used to exploit a celebrity’s likeness to push products, scams, and fake news: earlier in the year, a deepfake video of Swift was used to sell Le Creuset cookware.
3. Netflix’s What Jennifer Did Controversy
In the true crime documentary What Jennifer Did, Netflix allegedly used AI-generated images to tell the story of Jennifer Pan. Viewers noticed that some photos of Pan show irregularities in her hands and that the backgrounds contain strange, unidentifiable objects. Producer Jeremy Grimaldi denied any AI use, saying Photoshop was used only to conceal the identity of the person who provided the photos. Controversy aside, the situation raises the question: Can viewers trust documentary films?
4. New York City Chatbot Failures
A New York City government chatbot encouraged small business owners to break the law. The Associated Press reported that the AI tool “falsely suggested it is legal for an employer to fire a worker who complains about sexual harassment, doesn’t disclose a pregnancy, or refuses to cut their dreadlocks.” The chatbot’s disclaimer now states that it can’t provide legal advice. It’s a prime example of how AI can fabricate answers to specific questions it doesn’t actually have accurate information about, with potentially serious consequences.
5. Holiday Coca-Cola Ad
Fast Company writer Michael Grothaus wrote that he is so over AI-generated art, explaining that Coca-Cola’s holiday ads used to be magical but now just add to the slop. This year’s spot was an AI-generated remake of the 1995 “Holidays Are Coming” ad, complete with the telltale hallmarks of AI imagery that never quite line up with reality.
6. Shrimp Jesus Images
Facebook has become littered with AI-generated content. One of the year’s most notable examples was the wave of AI images of Jesus circulating around the platform. “Shrimp Jesus,” an AI image fusing Jesus Christ with a tangle of shrimp, got an enormous amount of traction, and other AI-generated Jesus photos soon popped up everywhere. A study from Stanford University took a deeper look at the spam and scams rampant on Facebook and found that the AI-generated posts it analyzed reached hundreds of millions of people.
Greatest AI Failures of 2024
What does this mean for the future of AI? More oversight is needed to prevent failures like these. Left unchecked, the technology can have unforeseen consequences far more serious than a disappointing Spotify Wrapped. Until we reckon with AI’s power and complexity, failures will keep happening, and models will need to get substantially better before artificial intelligence becomes something people can truly rely on.
Have you experienced any AI failures? Let us know in the comments.
Teri Monroe started her career in communications working for local government and nonprofits. Today, she is a freelance finance and lifestyle writer and small business owner. In her spare time, she loves golfing with her husband, taking her dog Milo on long walks, and playing pickleball with friends.