Hyper-ed Hyperreality: the enhancing effects of technological progress on the postmodern condition of the hyperreal
Author: Mattia Spagnuolo
Abstract and contextual review
Through the lens of critical technical practice, I present a study that gathers and links together existing research concerning hyperreality, artificial intelligence and social media. My objective is to stress the consequences that technological progress is having on our perception of reality, in order to raise awareness of the issue. Firstly, I will introduce Baudrillard’s theory of hyperreality and how it relates to us today; then, referencing mostly the works of Eli Pariser, Zeynep Tufekci and Nicholas Carr, I will address how technological advances are enhancing this condition, with an impact both on society and on the self. My research ultimately culminates in an artefact meant to use technology as a tool for social commentary.
What is Hyperreality?
In order to fully understand the concept of hyperreality, introduced by the French philosopher Jean Baudrillard towards the end of the 20th century, we must first explore the ideologies and notions that were brought about by Postmodernism and, more specifically, Poststructuralism.
Postmodernism is a late 20th-century movement that references the fragmentation of old narratives and is characterized by broad skepticism, subjectivism and a general suspicion of reason. To the post-structuralist, there is no stable point from which anyone can assert they have arrived at the truth. While the Enlightenment thinkers held that there is “a way that things are” which can be reached through scientific rationality, the post-structuralists argue that all that we, as human beings, can ever hope to have access to is a set of cultural and scientific constructions created in an attempt to understand reality. What we think of as “the truth” is really the current dominant narrative of our own culture, and it is being constantly redefined through interaction with other cultures that interpret reality in different ways.
Baudrillard argues that, in order to compensate for the fragmentation and disconnection in today’s society, we have become absolutely obsessed with visual imagery. We see our lives through the lens of a complex network of signs and symbols – fed to us by mass-media – and we understand our reality only in terms of how it compares to what is playing out on the screens in front of us. All this media content dictates who we are, our relationships, what we buy and the way we think of ourselves and of the world around us.
The visual bombardment promulgated by the media is nothing but an imitation of our lives, yet we in turn base our lives on it. This cycle goes on long enough, says Baudrillard, that such media-generated copies of reality become the real. Our perception of reality is so mediated by its mass-media representation that we can no longer distinguish between representations of reality and reality itself. Hence, we live in a simulation that is housed not in computer hardware, but in our own heads. Baudrillard calls this simulation, imitation and seamless substitution of reality ‘hyperreality’.
Enhancing effect of technological progress on the hyperreal: impact on society
A subject of the postmodern era is constantly immersed in an endless torrent of information. New media – such as the Internet, mobile technologies and social networks – given their ubiquity and potential, are able not only to seize the individual in the pervasive flow of information, but also to include them in the process of media creation itself – a process that is key in shaping public consciousness and in creating public opinion.
It is important to point out that today’s state of uncertainty about reality arises not from the lack of information, but from information itself and even from an excess of it, namely from hyper-information. Such a situation, in which information no longer informs and the truth is indistinguishable from its counterfeit simulation, is immediately recognizable within today’s climate of fake news and disinformation.
The problem of obtaining objective and unfiltered knowledge is therefore substantial in today’s world. Even those who thoughtfully analyze the incoming information and reflect on current events and facts – an exception in the process of media creation and transmission – face a task that is becoming increasingly more difficult. The following paragraphs explore how the remarkable pace of technological progress is silently – but drastically – aggravating the already alarming condition of our hyperreal perception of reality.
The algorithms that run on our machines are getting more and more intelligent. Although the fields of Machine Learning (ML) and Artificial Intelligence (AI) allow for a considerable acceleration in our understanding of many areas of study and research, they also introduce equally significant problems and threats that must be properly considered and addressed.
One of them emerges from the phenomenon of content personalization. Content personalization is a strategy that relies on visitor data to deliver relevant content based on audience interests and motivations. It is an extremely useful tool that helps companies provide consumers with the content and products they are more likely to appreciate, but it carries further consequences. The media creation process has fully incorporated such algorithms, which now run in every search engine and social media network. The result is an invisible editing of the Web, which went from being a common source of shared information to a highly tailored marketplace of targeted ads. The personalization is so selective that the content we receive is always in accordance with our beliefs and view of the world. Every time we think we’re getting the full picture, even in a theoretically objective space such as a search engine, we are using a manipulated piece of technology bent more on customer satisfaction than on truth. Ideologically diverse content that contradicts our values is edited out without our ever seeing it happen. Already in 2011, Google used more than fifty cues – from where we are located to which computer and browser we are using – to personally tailor our query results. Eli Pariser defines this condition as the ‘filter bubble’: the personal, unique universe of information that each of us lives in online. What’s in this bubble depends on who we are and what we do, but we do not decide what gets in and, more importantly, we do not see what gets edited out. [a1]
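The mechanism behind the filter bubble can be illustrated with a deliberately minimal sketch. The topics, stories and scoring rule below are invented for illustration – real systems weigh dozens of signals, as Pariser notes – but the structural point survives: content that matches past behavior rises, and everything else is silently dropped.

```python
from collections import Counter

def personalize(feed, click_history, top_k=3):
    """Rank candidate stories by overlap with topics the user has
    clicked before; anything outside the top_k is never shown."""
    interests = Counter(t for story in click_history for t in story["topics"])
    def score(story):
        return sum(interests[t] for t in story["topics"])
    return sorted(feed, key=score, reverse=True)[:top_k]

# A user who only ever clicks sports and celebrity stories...
history = [{"topics": ["sports"]}, {"topics": ["sports", "celebrity"]}]
feed = [
    {"title": "Match report",      "topics": ["sports"]},
    {"title": "Red carpet photos", "topics": ["celebrity"]},
    {"title": "Transfer rumours",  "topics": ["sports", "celebrity"]},
    {"title": "Election analysis", "topics": ["politics"]},  # edited out
]
bubble = personalize(feed, history)
# The politics story never reaches the screen -- and the user
# has no way of knowing it was ever a candidate.
```

The crucial property is not the ranking itself but its invisibility: nothing in `bubble` records that the election story existed.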
The problem does not end here. By churning through this massive amount of data, ML algorithms learn a great deal about us. They can quite easily infer things like our ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, gender and sexual orientation, just from Facebook likes and web searches. This rich record of information is heavily exploited to construct what is known as a persuasion architecture of the Web. Such architectures algorithmically arrange the content we see in the order that the algorithm predicts will entice us to stay on the site longer. What the algorithm picks to show us can affect not only our emotions, but also our political views and behavior. Thanks to the versatility of the digital world, persuasion architectures can be built at the scale of billions and deployed against individuals one by one, by figuring out their weaknesses, preferences and views. On top of that, they can be delivered to each person’s private phone screen, where they are invisible to everyone else.
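How traits can be inferred from likes is less mysterious than it sounds. The toy below is a sketch under invented assumptions – the pages and weights are made up, and the published studies use regression models fitted on large datasets – but the mechanism is essentially this: a score computed over binary like-indicators, each carrying a learned correlation with the trait.

```python
# Hypothetical learned correlations between liked pages and a trait.
# In real studies these weights come from fitting a model to data.
TRAIT_WEIGHTS = {
    "extraversion": {
        "party_events": 0.8,
        "chess_club": -0.5,
        "poetry": -0.3,
    },
}

def infer_trait(likes, trait):
    """Sum the learned weight of every page the user has liked;
    pages with no known correlation contribute nothing."""
    weights = TRAIT_WEIGHTS[trait]
    return sum(weights.get(page, 0.0) for page in likes)

extravert_score = infer_trait({"party_events"}, "extraversion")      # positive
introvert_score = infer_trait({"chess_club", "poetry"}, "extraversion")  # negative
```

No single like is revealing; the inference emerges from the aggregate, which is why the record of thousands of likes per user is so valuable to a persuasion architecture.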
As citizens, we no longer know if we are seeing the same information that anybody else is, and without a common basis of information public debate is becoming impossible. Our opinions and emotions can now be drastically shaped by autonomous algorithms whose complexity is escaping the grasp of our full understanding. The tragedy, as highlighted by techno-sociologist Zeynep Tufekci, is that we are building this infrastructure of surveillance authoritarianism merely to get people to click on ads. Imagine what a state could do with the immense amount of data it has on its citizens.
“If the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they’re doing it at scale through our private screens so that we don’t even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spiderweb and we may not even know we’re in it.” [a2]
The dystopias of the popular imagination, in which artificial intelligence turns against us of its own accord, miss the point. We must not fear what AI and ML will do to us on their own; it is how the people in control of these massive media structures will use such algorithms that should raise our concerns. Much of the technology that threatens to undermine our freedom in the near future revolves around the business of harvesting and selling our own data. Technology development and politics must ensure that such systems support us in our human goals without violating our human values. We must demand full transparency in the way our data is collected and used, and we need to be able to exercise control over what algorithmic decisions show us.
Enhancing effect of technological progress on the hyperreal: impact on the self
The contribution of the individual to the creation of the hyperreal becomes very evident when it comes to a specific and relatively new kind of media: social media. Social media encourages the fabrication and projection of artificial images of one’s self for the consumption of others, thus intensifying the already overloaded process of visual bombardment that characterizes today’s society. By sharing such images, individuals most often showcase and emphasize attributes of wealth, travel, romantic and sexual conquests, fun, fulfillment and happiness, as if they were carefully crafting their own television commercial. At the same time, when consuming images of others, they compare them with their own lives and tend to idolize and replicate them, hence actively participating in the never-ending cycle of replication of reality discussed previously. Needless to say, these depictions are usually based on unrealistic premises, thus creating a perception of one’s self that does not reflect reality. There are two consequences worth analyzing.
Buying into these images can result in worshipping and imitating them. They become an unrealistic reference, which may consequently lead to a lack of realistic role models – key in one’s development and growth as an individual.
A second, and perhaps more vicious, consequence of these projections is that they alter our sense of what is normal. We tend to assume that without all the things we see shown off on social media we are not normal. Without them, we feel flawed, different and alone. On top of this, revealing this lack of perfection would be an admission of failure in the “perfect image game” that everyone is playing. Since we have a limited reference of others, we might feel we are the victims of some unique nightmare that seems to be happening only to us.
The new sense of what’s normal prevents the individual from seeking help, which is often a crucial step in the healing process. Believing that seeking help is abnormal is the ultimate form of disconnection and fragmentation in today’s society: we avoid what we actually need in order to be well. The identity crisis, a symptom of the postmodern era, is accentuated, since individuals are no longer capable of dealing with their own insecurities and uniqueness.
The perception of “normal” as something idealized and perfect – and anything else as a deviation from it – should warn us of the often-negative influence that social media has on our lives. Instead of bringing us closer, it ends up establishing a state of disconnection from one another and detachment from real life.
A new form of thinking
The effects of technological progress are shaping not only what we think, but also how we think.
Thanks to the constant presence of text on the Internet and to the countless text messages we exchange on a daily basis, we may be reading more today than in the second half of the 20th century, when television was the main medium of choice. However, the quality of reading has drastically changed. A study published by University College London found that “people using the Internet exhibited a form of skimming activity, hopping from one source to another and rarely returning to any source they’d already visited. They typically read no more than one or two pages of an article or book before they would ‘bounce’ out to another site. Sometimes they’d save a long article, but there’s no evidence that they ever went back and actually read it. As users ‘power browse’ horizontally through content, it almost seems that they go online to avoid reading in the traditional sense”.
The activity of reading demands high levels of attention, as it is not an instinctive skill wired in our genes the way that speech is. Since we have to teach our minds to do it, the medium we use when practicing reading plays an important role in forming the neural circuits inside our brains. It has been shown that such circuits are almost infinitely malleable. The brain has the ability to reprogram itself on the fly, altering the way it functions. Thanks to our brain’s plasticity, it follows that the current process of adapting to new media technologies is directly reflected in our brains at a biological level.
As people’s minds become accustomed to the format of content that characterizes Internet media, traditional media have to adapt to the audience’s new expectations by shortening their articles and introducing summaries, pop-ups and content snippets – which does nothing but reinforce the way the Internet is altering our minds. Old media have no choice but to play by the new-media rules.
Never has a communication system exerted such broad influence over our thoughts as the Internet does today. It follows that the design of the Web maps directly onto the wiring of our brains. Drawing on terabytes of behavioral data, content-picking algorithms control how people find information and extract meaning from it. What Frederick Taylor did for the work of the hand, Google is doing for the work of the mind, in a process that has, not surprisingly, been named Digital Taylorism. In Google’s view, the more information we can access and the faster we can extract its core content, the more productive we become as thinkers. Ambiguity and diversity are not openings for insight and discussion, but bugs to be fixed. This idea is not only built into the workings of the Internet; it is its business model as well. The faster we surf the Web and the more links we click and pages we view, the more digital traces we leave behind; such traces can be collected and used to feed us tailored ads. The last thing data-harvesting companies want is to encourage a slow and thoughtful use of the Internet. As Nicholas Carr adequately put it in his article “Is Google making us stupid?” [a3]:
“As we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.”
The artefact
My artefact consists of an Instagram filter acting in the same medium it criticizes, as I thought this would be the most effective – if not the only – way to reach people. My filter’s purpose is anything but filtering; in fact, it is quite the opposite. With it, I want to shed light on the condition of hyperreality that characterizes our postmodern era – utterly unknown to most of us obliviously living in it – and I intend to create an opening for insight and discussion. This form of critical technical practice is my intervention against the problematics I discuss.
I would have liked to link my research project to the filter, but due to Instagram’s restrictions (the swipe-up link is only granted to those who have over ten thousand followers), for now it will be limited to a visual and symbolic representation of the subject. Alas, yet another simulacrum in the myriad of signs and symbols that surround us. However, I plan to engage in dialogue with my personal circle of acquaintances in the hope of increasing the effectiveness and reach of my intervention.
A demo of the filter in use can be found below.
Conclusion
The condition of hyperreality, first introduced by Baudrillard towards the end of the 20th century, is more topical than ever. In light of the unrelenting influence of technological progress on it, I have denominated it Hyper-ed hyperreality. My intent was to raise further awareness of the issue in the hope of encouraging widespread action to subvert it.
a1. Pariser, E. (2011) Beware online ‘filter bubbles’. TED. Available at: https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles
The work of Pariser was what brought me to shift the main focus of my research project specifically to the consequences of technological progress on the condition of hyperreality, rather than tackling merely hyperreality itself. His analysis of the filter bubble phenomenon is clear and supported by concrete examples.
a2. Tufekci, Z. (2017) We’re building a dystopia just to make people click on ads. TED. Available at: https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads
Tufekci’s talk came as quite a shock for me. Her arguments on the hidden implications of AI and ML made me realize the gravity of a problem I had not fully grasped before. Extremely current and alarming, her work is something we should all be aware of.
a3. Carr, N. (2008) Is Google Making Us Stupid?, The Atlantic. Available at: https://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/
In the past, I had already come across articles on the plasticity of the brain, but they all praised this feature of our minds as positive and useful. Carr, on the other hand, highlights how the malleability of our brains, together with the frenetic rhythm of our online activity, could lead to a flattening of our mental capabilities.
- ‘Episode 124 – Simulacra and Simulation’ (2018) Philosophize This!, 25 October. Available at: http://philosophizethis.org/simulacra-and-simulation/
- Chistyakov, D. (2016) ‘Social Dimension of Media Space in the Age of Postmodernity In the Context of Objective Knowledge Obtainment’, in Proceedings of the 2016 International Conference on Arts, Design and Contemporary Education. 2nd International Conference on Arts, Design and Contemporary Education, Moscow, Russia: Atlantis Press. doi: 10.2991/icadce-16.2016.67.
- Adams, R. (2018) ‘Did Baudrillard foretell the advent of fake news? From disinformation to hyperinformation’, Critical Legal Thinking, 27 April. Available at: https://criticallegalthinking.com/2018/04/27/did-baudrillard-foretell-the-advent-of-fake-news/
- Alves, N. (2018) Social Media and Disconnection, Medium. Available at: https://medium.com/energy-and-consciousness/social-media-and-disconnection-2c20095eccb8
- Contently (2016) How Facebook’s Filter Bubble Warped My Perception of Reality, Medium. Available at: https://medium.com/@contently/how-facebooks-filter-bubble-warped-my-perception-of-reality-687e57e5160e
- Fox Harrell — Virtual Identities (no date). Available at: https://futureofstorytelling.org/video/fox-harrell-virtual-identities
- Gwazi, D. (2017) Hyperreality, Media Personalization, Expectations, and Preferences, Medium. Available at: https://medium.com/@.WOKE/hyperreality-media-personalization-expectations-and-preferences-587b45a80899
- ‘Hyperreality as a Theme and Technique in the Film Truman Show’ (2018), p. 5.
- Kirkwood, M. (2019) ‘Hyperreality and the Consumption of the Subject as Object in “Black Mirror”’, Inquiries Journal, 11(10). Available at: http://www.inquiriesjournal.com/articles/1771/hyperreality-and-the-consumption-of-the-subject-as-object-in-black-mirror
- Mazzoleni, G. (ed.) (2015) The International Encyclopedia of Political Communication. 1st edn. Wiley. doi: 10.1002/9781118541555.
- ‘Soul-Sucking Photos Show How Phone Addiction Is Stealing Our Souls’ (no date) Bored Panda. Available at: https://www.boredpanda.com/screens-stealing-soul-social-media-sur-fake-antoine-geiger/
- NPR (no date) ‘The Shallows’: This Is Your Brain Online. Available at: https://www.npr.org/templates/story/story.php?storyId=127370598
- Tufekci, Z. (2015) How Facebook’s Algorithm Suppresses Content Diversity (Modestly) & How the Newsfeed Rules the Clicks, Medium. Available at: https://medium.com/message/how-facebook-s-algorithm-suppresses-content-diversity-modestly-how-the-newsfeed-rules-the-clicks-b5f8a4bb7bab