How People Are Getting News in 2025: Social Media, Video, AI — and Growing Misinformation Worries
A Changing News Landscape
Traditional news channels such as TV, print, and established news websites are losing ground globally. Instead, social media and video platforms are rapidly becoming dominant sources of daily news. This shift is documented in the 2025 Digital News Report by the Reuters Institute.
Moreover, the rise of news consumption via AI chatbots and podcasts points to a broader, more fragmented media ecosystem.
Social Media & Video Platforms Take Over
First, social media as a primary news source has surged—from just 4% of U.S. adults in 2015 to 34% in 2025. Platforms like YouTube, Facebook, Instagram, X, and TikTok are now more widely used for news than traditional outlets.
Second, younger adults lean especially heavily on these formats. For example, 44% of 18–24-year-olds and 38% of 25–34-year-olds in the U.S. cite social media and video platforms as their main news source.
Podcasts, Influencers & AI Chatbots Emerge
Beyond social platforms, podcasts and influencers have become key voices in the news space. In the U.S., Joe Rogan's podcast content reaches roughly one in five news consumers, especially younger and politically engaged men. In other countries, local creators on TikTok or YouTube play a similar role.
Furthermore, AI chatbots like ChatGPT and Google’s Gemini are now being used by about 7% of adults weekly for news, rising to 15% among under-25s.
Why Trust and Misinformation Are Rising Concerns
Meanwhile, trust in news remains low globally, hovering around 40%. Many people worry that AI-summarized feeds and influencer-led news make it harder to judge what is credible.
Also, misinformation is now considered the top short‑term global risk by the World Economic Forum. That includes both fabricated narratives and AI-generated deepfakes.
AI-Generated Misinformation: A Unique Threat
Moreover, advanced AI tools are now used to craft deepfakes, synthetic news, and influencer personas—some of which go viral. Research shows that AI-generated misinformation is more often positive in tone and more viral, even when originating from small accounts.
Meanwhile, investigators have uncovered techniques where state actors produce thousands of bogus articles tailored to train AI systems—grooming them to repeat disinformation. This practice is referred to as “LLM grooming.”
Warnings About AI Influencers & Content Authenticity
Also, new AI influencers—like the virtual persona “Mia Zelu”—are drawing criticism for eroding trust. Surveys report that 82% of social media users doubt content authenticity, especially when AI tools lack transparency.
Efforts to Combat Misinformation
First, some platforms are starting to label AI-generated content. Reddit users are urging broader adoption of mandatory warnings on synthetic images and videos to help reduce confusion.
Second, experiments in France and Germany show that following verified news sources on social media regularly for just two weeks improves news literacy and the ability to identify false stories.
In places like the Philippines, concern about misinformation is particularly high, with about two-thirds of people saying they are worried. Many rely on government websites, trusted news brands, and fact-checking sites to check suspicious content.
Role of Journalism & Ethical AI Use
Meanwhile, newsrooms are adapting under pressure. Ethical AI guidelines now emphasize labeling AI-generated material, enforcing human oversight, and preserving journalistic integrity. Tech tools assist—without replacing—experienced reporters.
Regional Focus: Australian & Filipino Trends
Furthermore, a recent report found that, for the first time, more Australians get news via social media than from traditional outlets. At the same time, trust remains low and news avoidance persistent.
Also, Filipino anxiety over misinformation is at an all-time high, with 67% rating it a serious issue. Despite heavy reliance on social media for news (66%), many turn to verification tools and literacy training.
Implications for Consumers & Media
Above all, news consumers in 2025 face an increasing volume of information sources—some credible, others suspect. Relying on influencers or AI alone carries risk. Mixing formats—like trusted outlets, podcasts, and verified social feeds—may offer a safer route.
In addition, policymakers and tech companies must invest in digital literacy programs, transparent algorithms, and AI accountability frameworks to rebuild public trust.
Finally, news organizations must balance efficiency with credibility. The rise of “Super Journalists”—experts who collaborate with AI for deeper context—is one promising model.
Final Thoughts
Ultimately, the way people get their news in 2025 looks very different from even a decade ago. Social media, video platforms, influencers, podcasts, and AI tools now dominate attention. This shift, however, brings rising worry over authenticity and misinformation.
To stay informed—and not misled—readers should actively seek reliable sources, cross-check content, and support clear labeling of AI-generated material. As the media ecosystem evolves, media literacy and editorial transparency are no longer optional—they are essential.