This is the second in a series of… well, at least two desktop literature reviews I’ve been writing to agitate my withered grey cells.

While I’ve had some time and space to trawl through recent open-access papers over the Australian summer holidays, I am expecting the arrival of a third child in February. So I make no commitments to future output, but reader, I am enjoying hard-staring at interesting topics.

After last month’s review of Information Seeking and Gen AI, which was peppered with some scepticism, I thought I’d try and (mostly) escape the AI zeitgeist and look into how we’re being impacted by information overload.

Tatsuo Miyajima, Connecting with Everything, 2017

This is a term that has rattled around in my head for almost a decade. Both in my personal life and at work, I have spent a lot of time thinking about the increase in information I am bombarded with on a daily basis. Back in 2017, when I was working as a trainer, I actually taught a course called ‘Information Overload’, meant to help attendees find and verify trustworthy information.

In hindsight, 2017 seems like the good old days compared to where we are now. (Cue The Office quote: “I wish there was a way to know you were in the good old days…”). If you can cast your mind back to that innocent era:

  • We didn’t know about Cambridge Analytica’s voter manipulation (although we’d already been hit by it); the story finally broke in March 2018.
  • We had no idea of the depth and volume of misinformation that would emerge through the COVID-19 pandemic.
  • TikTok was barely up and running, with only 100 million users at the end of 2016 (vs ~1.5 billion today).
  • The once interesting, influential and beloved (to some) Twitter was independent and years away from the ‘free speech’ takeover and transition to X.

Phew. And that’s not to mention the slop we’re being served by AI, because I’m trying not to mention it this month.

So, waffle aside, I have tried to pull together what I think is a useful review of the cognitive costs of too much information in the mid-2020s and what we can do about it. As with my previous review, I have sourced predominantly from recent, open-access research, along with a few classics for grounding.

My main argument here is that contemporary information overload isn’t simply about having too much information. It’s a mismatch between human cognitive limits and the digital information environments we’ve built. When this overload interacts with social influence, algorithmic feeds and emotional cues, it degrades our decision quality, makes us rely more heavily on cognitive shortcuts, and accelerates the spread of misinformation. I’ll show that while individual coping strategies help at the margins, the evidence increasingly suggests we need structural and regulatory intervention alongside improvements in information literacy to make any real difference.

Scope and approach

This is a desktop literature review drawing primarily on recent (2021-2025), open-access research across psychology, information systems, media studies and public health. Sources were identified through targeted searches of Google Scholar and publisher databases using terms including information overload, choice overload, decision paralysis, misinformation sharing and social media cognition. Foundational “classic” works are included where they remain theoretically influential (e.g. Simon, Miller, Tversky & Kahneman). The review prioritises empirical studies, systematic reviews and meta-analyses, but is not exhaustive.

The Information Paradox

We’ve never had more information at our fingertips. A few taps on a smartphone can summon answers to virtually any question, from restaurant reviews to medical advice to breaking news from around the globe. Yet paradoxically, all this access to information hasn’t made us better decision-makers. If anything, the sheer volume of information we consume daily might be making our decisions worse.

Researchers have been studying information overload for decades. The concept describes what happens when the amount of information available exceeds our capacity to process it effectively (Eppler & Mengis, 2004). As the digital landscape has exploded with social media, 24/7 news cycles and AI-generated content, understanding how information overload affects our decision-making, and what we can do about it, has become increasingly important.

The Limits of the Human Mind

To understand why information overload is such a problem, we need to first appreciate the limitations of human cognition. Back in 1956, psychologist George Miller published his famous paper on “The Magical Number Seven, Plus or Minus Two” demonstrating that human working memory can only hold about seven pieces of information at a time (Miller, 1956). This is a feature of how our brains evolved, but it creates a fundamental bottleneck when we’re faced with the endless streams of data flowing through modern life.

This bottleneck highlights a crucial distinction made by Clay Shirky (2008) nearly two decades ago: ‘It’s not information overload. It’s filter failure.’ In the past, physical and economic barriers acted as filters. Publishers, editors and limited broadcast hours curated what reached us. Today, those external filters have collapsed, leaving our biological filters to do all the heavy lifting against an infinite stream. We are trying to use a biological sieve to hold back a digital tsunami.

Herbert Simon, who would later win a Nobel Prize for his work on decision-making, recognised this problem early. He proposed the concept of bounded rationality: the idea that humans don’t make perfectly rational decisions because we simply don’t have the cognitive resources to weigh every option (Simon, 1955). Instead, we “satisfice”: we look for options that are good enough rather than optimal. This works fine when we’re choosing what to have for lunch, but can lead to serious problems when the stakes are higher.
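
To make satisficing concrete, here is a minimal sketch (my own illustration with invented option data and an invented threshold, not anything from Simon’s paper) contrasting an exhaustive ‘optimise’ search, which scores every option, with a satisficing search that stops at the first option clearing a ‘good enough’ bar.

```python
import random

def optimise(options, score):
    """Score every option and return the best one (full search)."""
    return max(options, key=score)

def satisfice(options, score, good_enough):
    """Return the first option whose score clears the 'good enough' bar.

    Falls back to the best option seen if nothing clears it.
    """
    best = None
    for option in options:
        if score(option) >= good_enough:
            return option  # stop searching: this is good enough
        if best is None or score(option) > score(best):
            best = option
    return best

def quality(option):
    return option["quality"]

# Toy example: 1,000 hypothetical lunch options with random quality scores.
random.seed(7)
lunches = [{"name": f"option-{i}", "quality": random.random()} for i in range(1000)]

print(optimise(lunches, quality))        # examines all 1,000 options
print(satisfice(lunches, quality, 0.9))  # typically stops after a handful
```

The point of the sketch is the stopping rule, not the numbers: satisficing trades a possibly better outcome for a dramatically smaller amount of information processed.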

When we’re overloaded with information, our decision-making shortcuts can backfire spectacularly. Tversky and Kahneman’s groundbreaking research on heuristics and biases showed that humans rely on mental shortcuts, rules of thumb, to make judgments, especially under uncertainty (Tversky & Kahneman, 1974). These heuristics are usually helpful, but they can lead us astray. The availability heuristic, for example, causes us to overestimate the likelihood of events that come easily to mind. So if you’ve just read five news articles about plane crashes, you might overestimate the danger of flying, even though the statistics say otherwise. Information overload feeds these biases by constantly flooding our minds with memorable, emotionally charged content.

Decision Paralysis: Too Many Choices, Too Little Action

One of the most studied consequences of information overload is choice overload: what happens when we’re faced with too many options. Intuitively, more choice should be better. But research tells a different story. When people are presented with an overwhelming number of alternatives, they often experience decision paralysis: the inability to make any choice at all (Boby, 2024; Misuraca et al., 2024).

It’s worth noting that much of the choice overload literature is still dominated by controlled experiments and platform-specific case studies. Effect sizes vary substantially and not all studies find paralysis effects, particularly when users have strong prior preferences or domain expertise. This suggests that overload is not universal, but contingent: it emerges most strongly when high option volume is paired with low preference clarity and poor choice architecture.

This phenomenon plays out across many domains. In online food ordering, for example, researchers found that an excessive number of menu options significantly increased decision paralysis, particularly among younger users who are already heavy consumers of digital content (Boby, 2024). The same pattern appears in everything from retirement savings (where too many investment options lead people to save less) to dating apps, where endless swiping can paradoxically lead to fewer meaningful connections (Thomas et al., 2025).

The moderators of choice overload are complex. Research suggests it’s not solely driven by the raw number of options; it’s also about how complex those options are, how certain people are about their preferences and how the choices are presented (Misuraca et al., 2024). Clear categorisation and filtering tools can help, which is why good design matters so much in digital environments.
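
To illustrate the design point, here is a small sketch (hypothetical data and field names, not taken from any of the cited studies) of the filter-then-categorise step a well-designed interface can perform before anything reaches the user, so the raw option count never lands on working memory.

```python
from collections import defaultdict

def filter_and_categorise(options, max_price, dietary_tag, top_n=5):
    """Drop options outside the user's constraints, group the rest by
    cuisine, and surface only a short ranked shortlist per category."""
    groups = defaultdict(list)
    for option in options:
        if option["price"] <= max_price and dietary_tag in option["tags"]:
            groups[option["cuisine"]].append(option)
    return {
        cuisine: sorted(items, key=lambda o: o["rating"], reverse=True)[:top_n]
        for cuisine, items in groups.items()
    }

# Hypothetical menu data: a long list collapses to a few short, ranked lists.
menu = [
    {"name": "pad thai", "cuisine": "Thai", "price": 18, "rating": 4.6, "tags": {"vegetarian"}},
    {"name": "green curry", "cuisine": "Thai", "price": 21, "rating": 4.4, "tags": {"vegetarian"}},
    {"name": "margherita", "cuisine": "Italian", "price": 19, "rating": 4.7, "tags": {"vegetarian"}},
    # ...imagine several hundred more entries here
]
print(filter_and_categorise(menu, max_price=20, dietary_tag="vegetarian"))
```

The design choice doing the work here is that ranking and truncation happen per category, so the user only ever compares a handful of options at a time.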

The Social Media Factor

If information overload was a problem before social media, it’s become a crisis in the age of endless scrolling. The platforms we use daily are designed to maximise engagement, which means maximising the amount of stimulating content we consume. The effects on our cognition and mental health are becoming increasingly clear.

A recent systematic review and meta-analysis of short-form video use (e.g. TikTok, Instagram Reels) found that increased use is associated with poorer cognition, particularly attention and inhibitory control, as well as higher levels of stress and anxiety (Nguyen et al., 2025). This matters for decision-making because attentional control is exactly what we need to sift through information carefully and resist impulsive choices.

Social media also creates powerful dynamics of social influence that can distort our judgment. A large-scale experiment on a social news platform found that positive ratings created significant herding effects: when people saw that others had upvoted a comment, they were 32% more likely to upvote it themselves, regardless of the comment’s actual quality (Muchnik et al., 2013). This positive herding accumulated over time, inflating final ratings by an average of 25%. Interestingly, negative herding was naturally corrected by users, but positive bias wasn’t. This asymmetry helps explain why misinformation that gets early positive engagement can spread so rapidly.
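
To picture how a single early upvote can snowball, here is a toy simulation (my own sketch with assumed parameters; it is not a reproduction of Muchnik et al.’s experiment or their model): each arriving voter is slightly more likely to upvote when the running score is already positive, and we compare average final scores with and without one artificial first upvote.

```python
import random

def simulate_votes(base_upvote_prob, herding_boost, seeded_upvote, n_voters=100, seed=0):
    """Toy model of positive herding on a single comment.

    Each arriving voter upvotes with probability base_upvote_prob, plus
    herding_boost whenever the running score is already positive.
    Returns the final net score (upvotes minus downvotes).
    """
    rng = random.Random(seed)
    score = 1 if seeded_upvote else 0  # the experimental manipulation
    for _ in range(n_voters):
        p = base_upvote_prob + (herding_boost if score > 0 else 0)
        score += 1 if rng.random() < p else -1
    return score

# Identical comment quality and voters; the only difference is one early upvote.
runs = 2000
control = sum(simulate_votes(0.5, 0.05, False, seed=i) for i in range(runs)) / runs
treated = sum(simulate_votes(0.5, 0.05, True, seed=i) for i in range(runs)) / runs
print(f"average final score, no seed:       {control:.1f}")
print(f"average final score, seeded upvote: {treated:.1f}")
```

Even with a tiny herding boost, the seeded comments end up with noticeably higher average scores than the identical unseeded ones, which is the accumulation dynamic the experiment points to.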

A recurring limitation across this literature is its reliance on correlational designs and self-reported use. While associations between short-form video consumption and cognitive outcomes are robust, causal pathways remain contested, particularly given the likelihood of reciprocal effects (e.g. attentional difficulties driving platform use rather than resulting from it). Longitudinal and quasi-experimental work remains comparatively scarce.

The Misinformation Crisis

The combination of information overload, cognitive biases and social influence creates fertile ground for misinformation. When people feel overwhelmed by information, particularly during crises like the COVID-19 pandemic, they’re more likely to share content without verifying it (Huang et al., 2022). This isn’t because they’re careless or stupid; it’s a natural coping mechanism. Sharing information creates a sense of control and connection during uncertain times. But it also means that false or misleading content can spread rapidly.

Research has identified several key drivers of unverified information sharing. Perceived severity plays a major role: when people believe a threat is serious, they’re more motivated to share warnings, even if those warnings haven’t been verified (Zhang et al., 2024). Herding behaviour is another factor: when people see others sharing information, they’re more likely to share it themselves, discounting their own judgment in favour of following the crowd. And anxiety, which is often triggered by information overload itself, leads people to share content impulsively as a way of coping with emotional distress.

Problematic social media use amplifies these effects. Users who show signs of excessive or maladaptive social media use are significantly more likely to believe false news and intend to engage with it through clicks, likes and shares (Meshi & Molina, 2025). This creates a troubling feedback loop: the people most susceptible to misinformation are also the heaviest users of the platforms where misinformation spreads.

Importantly, many misinformation studies are crisis-specific (COVID-19, elections), raising questions about generalisability to everyday information environments. There is also limited cross-cultural work, despite strong evidence that trust in institutions, media systems and platform governance varies substantially across contexts.

Social media companies bear significant responsibility for these dynamics. Their algorithms prioritise engaging content, which often means sensational or emotionally charged material, regardless of accuracy (Denniss & Lindberg, 2025). The limited commitment to content moderation, combined with financial incentives that reward engagement over accuracy, means that misinformation continues to thrive despite growing awareness of the problem.

What Can We Do About It?

The picture painted by this research might seem bleak, but there are evidence-based strategies for combating information overload and improving our decision-making. However, the weight of evidence suggests that individual-level solutions, while helpful, cannot address a problem whose roots are fundamentally systemic.

Individual Strategies (Necessary But Insufficient)

At the individual level, several approaches can help at the margins. Developing information literacy (learning to critically evaluate sources, recognise emotional manipulation and resist impulsive sharing) remains valuable (Shahrzadi et al., 2024). Chunking and organising information helps us work around working memory limits (Miller, 1956). Being aware of our own biases and deliberately slowing down can counteract over-reliance on heuristics.
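
As a toy illustration of chunking (mine, not Miller’s): regrouping a raw string of digits into a few larger chunks reduces the number of items you need to hold at once.

```python
def chunk(sequence, size):
    """Split a sequence into consecutive chunks of the given size."""
    return [sequence[i:i + size] for i in range(0, len(sequence), size)]

phone_number = "0412837465"
print(list(phone_number))      # 10 separate items: right at the edge of working memory
print(chunk(phone_number, 4))  # ['0412', '8374', '65']: three chunks, far easier to hold
```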

Research on AI-assisted decision-making suggests that when tools are well-designed and transparent about limitations, humans can sometimes achieve complementary performance better than either humans or AI alone (Steyvers & Kumar, 2024). But this requires building appropriate mental models of AI capabilities, which takes time and effort.

The problem: These strategies place the burden on individuals to resist environments specifically engineered to exploit cognitive vulnerabilities. As this review has shown, our working memory constraints (Miller, 1956), reliance on heuristics (Tversky & Kahneman, 1974), susceptibility to social influence (Muchnik et al., 2013), and limited attentional control (Nguyen et al., 2025) are intrinsic features of human cognition.

Structural and Design Interventions (Where the Evidence Points)

The research reviewed here consistently indicates that environmental design shapes outcomes more powerfully than individual effort. When online platforms implement effective filtering, categorisation and prioritisation tools, they reduce cognitive load and improve decision quality (Arnold et al., 2023). Nudging interventions, such as simple prompts asking users to consider accuracy before sharing, significantly reduce misinformation spread (Denniss & Lindberg, 2025). Warning labels on misleading content can help, though effectiveness depends heavily on design choices.

Crucially, these interventions work with human cognition rather than against it. They don’t require users to be more rational or less biased; they change the choice architecture to make better decisions easier.

The challenge is implementation. Many structural improvements run counter to platform business models that prioritise engagement over accuracy. As the social influence research demonstrates, algorithms that amplify engaging content create herding effects around misinformation (Muchnik et al., 2013). Financial incentives reward sensationalism over veracity (Denniss & Lindberg, 2025). Good design exists, but platforms often lack motivation to deploy it.

The Role of Regulation (Increasingly Necessary)

This brings us to an uncomfortable conclusion: meaningful improvement likely requires regulatory intervention, not just voluntary platform changes or individual upskilling. The mechanisms driving information overload and misinformation are embedded in the business models and algorithmic systems of major platforms.

Researchers have called for increased monitoring and regulation, including requirements for algorithmic transparency, more robust content moderation, and accountability for harmful design choices (Denniss & Lindberg, 2025; Clemons et al., 2025). Some have suggested that international treaty frameworks, similar to those used for tobacco control, might be necessary to address what amounts to a global public health crisis.

The evidence supports this escalation. Just as we didn’t solve air pollution by teaching people to hold their breath, we won’t solve information overload by teaching people to think harder. We need digital filters to become standard, just as filters became standard on cigarettes and catalytic converters became mandatory for cars. When human cognitive architecture meets digitally engineered environments designed to maximise engagement, individual willpower is not a match for structural incentives.

This isn’t to say individual strategies are worthless. They help people cope in the short term. But if we’re serious about addressing information overload and its consequences, the research points clearly toward systemic intervention as the primary solution, with education and personal strategies as important but secondary supports.

Looking Ahead

Information overload isn’t going away. If anything, the proliferation of AI-generated content promises to accelerate the problem further, creating ever more text, images and video competing for our limited attention. But understanding the mechanisms of overload, how our cognitive limitations interact with the design of digital environments, gives us tools to fight back.

The research is clear: we’re not built to handle the information environment we’ve created. Our working memory has limits. Our decision-making relies on shortcuts that can be exploited. And social dynamics amplify our individual vulnerabilities. But the same research also offers hope. By designing better systems, developing better skills and advocating for better policies, we can create an information landscape that supports rather than undermines good decision-making.

If information overload is a design problem as much as a cognitive one, then improving decision-making may depend less on fixing human limitations and more on taking responsibility for the systems we continue to build around them.

What We Still Don’t Know (And Why It Matters)

The research reviewed here points to several critical gaps that should concern anyone trying to navigate, or design for, contemporary information environments:

Do interventions actually stick? Most studies on accuracy nudges and warning labels measure immediate effects: does the person share less misinformation right now? But we don’t know if these interventions produce lasting changes or whether users simply habituate to them over weeks or months. This matters because platforms need to know whether anti-misinformation measures are sustainable investments or require constant reinvention to stay effective.

What’s actually happening in our heads? We know information overload correlates with poor decisions, but the specific cognitive pathways remain surprisingly murky. Is it primarily attentional depletion? Working memory saturation? Emotional dysregulation? Understanding these mechanisms would help us design targeted interventions rather than throwing spaghetti at the wall.

Can good design beat bad incentives at scale? The filtering and categorisation tools that reduce overload in lab studies often fail or underperform in real-world platforms where engagement metrics dominate. We need more evidence about whether thoughtful information architecture can meaningfully improve decision quality when deployed at the scale of billions of users, or whether perverse incentive structures will always undermine good design.

Answering these questions will require researchers to gain ethical access to platform data, transparency about algorithmic systems, and longitudinal study designs that follow users over months or years rather than minutes or hours. Until then, we’re making policy and design decisions with incomplete maps.


AI assistance disclosure: While the analysis and perspectives here are mine, I used generative AI tools to support the research process. I used Google’s Notebook LM to help with coding and organising the papers I’d collated. I used Anthropic’s Claude Sonnet 4.5 to verify that my interpretations aligned with what the papers actually claimed, to check citation accuracy, and to format the references consistently. All the sources below are real, publicly available papers. The synthesis, argument and conclusions are my own.


References

Arnold, M., Goldschmitt, M., & Rigotti, T. (2023). Dealing with information overload: A comprehensive review. Frontiers in Psychology, 14, 1122200. https://doi.org/10.3389/fpsyg.2023.1122200

Boby, J. (2024). An analysis of the impact of choice overload on inducing decision paralysis in the online food ordering industry. International Journal of Electronic Commerce Studies, 15(1), 1-24. https://doi.org/10.14445/23939125/IJEMS-V11I6P101

Clemons, E. K., Dewan, R. M., Kauffman, R. J., & Weber, T. A. (2025). Managing disinformation on social media platforms. Electronic Markets, 35, 52. https://doi.org/10.1007/s12525-025-00796-6

Denniss, E., & Lindberg, R. (2025). Social media and the spread of misinformation: Infectious and a threat to public health. Health Promotion International, 40(2), daaf023. https://doi.org/10.1093/heapro/daaf023

Eppler, M. J., & Mengis, J. (2004). The concept of information overload: A review of literature from organization science, accounting, marketing, MIS, and related disciplines. The Information Society, 20(5), 325-344. https://doi.org/10.1080/01972240490507974

Huang, Q., Lei, S., & Ni, B. (2022). Perceived information overload and unverified information sharing on WeChat amid the COVID-19 pandemic: A moderated mediation model of anxiety and perceived herd. Frontiers in Psychology, 13, 837820. https://doi.org/10.3389/fpsyg.2022.837820

Meshi, D., & Molina, M. D. (2025). Problematic social media use is associated with believing in and engaging with fake news. PLOS ONE, 20(5), e0321361. https://doi.org/10.1371/journal.pone.0321361

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97. https://doi.org/10.1037/h0043158

Misuraca, R., Ferrara, A., & Ferrara, E. (2024). On the advantages and disadvantages of choice: Future research directions in choice overload and its effects on decision-making. Frontiers in Psychology, 15, 1290359. https://doi.org/10.3389/fpsyg.2024.1290359

Muchnik, L., Aral, S., & Taylor, S. J. (2013). Social influence bias: A randomized experiment. Science, 341(6146), 647-651. https://doi.org/10.1126/science.1240466

Nguyen, T. V., Ryan, R. M., & Deci, E. L. (2025). Feeds, feelings, and focus: A systematic review and meta-analysis examining the cognitive and mental health correlates of short-form video use. Psychological Bulletin, 151(9), 1125–1146. https://doi.org/10.1037/bul0000498

Shahrzadi, L., Mansouri, A., & Nikakhlag, S. (2024). Causes, consequences, and strategies to deal with information overload: A scoping review. International Journal of Information Management Data Insights, 4, 100261. https://doi.org/10.1016/j.jjimei.2024.100261

Shirky, C. (2008, September 20). It’s not information overload. It’s filter failure [Video]. Web 2.0 Expo NY. YouTube. https://www.youtube.com/watch?v=LabqeJEOQyI

Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99-118. https://doi.org/10.2307/1884852

Steyvers, M., & Kumar, A. (2024). Three challenges for AI-assisted decision-making. Perspectives on Psychological Science, 19(4), 722-734. https://doi.org/10.1177/17456916231181102

Thomas, A. G., Finkel, E. J., & Eastwick, P. W. (2025). Decision-making on dating apps: Is swiping more, less, and swiping right wrong? Media Psychology. https://doi.org/10.1080/15213269.2025.2555430

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131. https://doi.org/10.1126/science.185.4157.1124

Zhang, Z., Cheng, Z., Gu, T., & Zhang, Y. (2024). Determinants of users’ unverified information sharing on social media platforms: A herding behavior perspective. Acta Psychologica, 248, 104345. https://doi.org/10.1016/j.actpsy.2024.104345