
 this message was deleted 

Available articles:
1. Getting a lay of the web: an overview from web 1.0 to 3.0


Soon to be released:

2. Shaping the digital world: on autonomy and design (subject to change)

3. Why curation matters: the institute using the internet, a how-(not)-to (subject to change)

4. Online communities and web-cultures (subject to change)

5. Speculating on the format of online art (subject to change)

Hi! You’ve encountered our research article in the wild. We’re glad you did, as there’s a lot to talk about, in our opinion. We’re all online, a lot of the time, and some of us have started noticing the implications for our daily lives. The ‘world’ of the web is becoming inextricably linked to our actual physical world, which means that art, too, is becoming ‘webbed up’.

More and more we feel the necessity to be online, and that goes for artists, organisations and institutions too. But what does this ‘being online’ actually mean? How do we go about translating our work to the web successfully? What should we take into consideration while doing so? And upon entering the web, how do we position ourselves among the myriad content streams, web-based cultures and platforms?


‘this message was deleted’ aims to inform you about topics relevant to these questions with five articles releasing over the coming months, each followed by a roundtable podcast where we invite guests to join the discussion. we’d love for you to become involved in the conversation as well, by leaving comments or voice-notes, or by emailing us your thoughts at info@bigfishfactory.eu or instagram.com/thismsgwasdeleted. because beyond informing, we want to start a broader conversation about the rapidly evolving landscape of the web and its relation to our lives and art!


so whether you consider yourself a web-citizen 100%, have your reservations about online and digital culture or just want to know more — feel welcome, be invited to participate and join us in considering art on the internet.

Getting a lay of the web: an overview from Web 1.0 to 3.0

written by: Olli, Ida Schuften and Bart Bruinsma

20 February 2023

 A lay of the web 

When it comes to online culture and the structure of the internet as a whole, we’re in the middle of a big turning point. Issues with the current state of the web are making users reconsider their engagement with it, resulting in the development of new organisational models and platforms, as well as new theories, tools and technologies to shape the internet’s future. The development of blockchain technology, augmented and virtual reality, and a spatial web are all part of this speculated future, dubbed Web 3.0.

This article offers a brief walk-through of the history of the mainstream world wide web and the basic characteristics of each of its phases, divided into a pre-history/technical explanation, Web 1.0, Web 2.0, and lastly Web 3.0. This should help us understand how the web might develop and what the internet's future might look like for institutions, creators/developers, and users/consumers.

 What is the internet literally 

In a nutshell, the internet is the decentralised distribution of information. This key element is embedded in the internet’s origin in the 1960s, as a response to the threat of nuclear attacks on US telecommunication infrastructure. At the time, US telecommunication was a centralised infrastructure with a single exit point for communication across the entire country; if this one point was targeted, communication would fail, making the system unreliable in the event of nuclear attacks from the USSR. In response, engineer Paul Baran devised a decentralised network that would allow information to travel via multiple exit points and thus be better protected against attacks, ensuring secure and stable communication within the United States. Information was transmitted between different relay points, also called nodes or hubs, until it arrived at its desired destination, establishing a decentralised communication network. Such a network also made it possible to break data into multiple smaller units and send them independently in a distributed manner over the net, which sped up transmission and resulted in a communication infrastructure that was not only more secure but also faster.
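The packet-switching idea described above can be sketched in a few lines of Python. This is a toy illustration, not a real network protocol: the node names, packet size and routing are all invented for the example.

```python
import random

# Toy illustration of packet switching: a message is split into small,
# numbered packets, each packet hops through relay nodes independently,
# and the receiver reassembles the message from the sequence numbers.
# The network, node names and packet size are invented for the example.

NODES = ["A", "B", "C", "D", "E"]

def split_into_packets(message, size=4):
    # Number each chunk so the receiver can restore the original order.
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def route(packet):
    # Each packet independently picks its own path through the relay nodes.
    path = random.sample(NODES, k=3)
    return path, packet

def reassemble(packets):
    # Packets may arrive out of order; the sequence numbers fix that.
    return "".join(chunk for _, chunk in sorted(packets))

message = "hello, decentralised world"
arrived = [route(p)[1] for p in split_into_packets(message)]
random.shuffle(arrived)            # simulate out-of-order arrival
print(reassemble(arrived))         # hello, decentralised world
```

The point of the sketch is the same as Baran's: no single relay is essential, and the message survives whichever order (or route) the pieces take.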


 image: networks by Vilhjálmur Yngvi Hjálmarsson 

The development of the early internet is thus defined by an evolution from information managed by a centralised entity with a single exit point to a decentralised network of multiple exit points forming an interconnected whole. This early network served as the foundation for the ARPANET, or Advanced Research Projects Agency Network: a tool that allowed different computers to communicate with one another, used mostly by departments within the United States government as well as universities and research institutions. The ARPANET is regarded as a precursor to the World Wide Web (WWW), or the internet as we know it today.

The web as it’s structured today did not take shape until the late 1980s and early 1990s, when the first true web browser launched, revolutionising how web media was organised and navigated. Three elements in particular shaped the internet: HTTP, HTML and URL. The Hypertext Transfer Protocol, or HTTP, transfers data from one machine to another, enabling communication between devices online. In daily use, we recognise hypertext in the form of the blue hyperlink that allows us to portal through the internet.


 image: hyperlinks on Google search 

Another element that catapulted the development of the internet was the creation of HTML, or HyperText Markup Language: the code used to lay out the structure of a web page and its contents, making it easier for users to navigate the web and for creators and developers to design websites and share information. The third element, the URL or Uniform Resource Locator, specifies the location of a resource on the internet.
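To make these building blocks concrete, here is a small sketch using Python’s standard library to take a URL apart into the components just described; the example URL itself is made up.

```python
from urllib.parse import urlparse

# A URL encodes both how to reach a resource (the scheme, e.g. HTTP)
# and where it lives (host and path). The URL below is made up.
url = "http://example.com/articles/web-history?page=1"
parts = urlparse(url)

print(parts.scheme)   # http  -> the protocol used to transfer the data
print(parts.netloc)   # example.com  -> the machine hosting the resource
print(parts.path)     # /articles/web-history  -> where the resource lives
print(parts.query)    # page=1  -> extra parameters sent to the server
```

Every blue hyperlink on a page is ultimately one of these strings, handed to the browser to fetch over HTTP.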

In short, the web is nothing more than a network of pipelines that carry electronic messages and information between people. These pipelines are physically connected through cable networks installed in the ground and lying on the sea floor, creating the connections required to communicate between continents. The ocean floor now hosts a large network of submarine cables that form one integrated system connecting worlds.

From the beginning of the web, the concept of connecting worlds and distributing information through decentralised means has been one of the internet’s greatest strengths.


 image: the internet in the physical world, the sea-string. 

Different layers of the web

 The surface web and the web iceberg 

The modern internet is roughly separated into two layers: the Surface Web and the Deep Web. When we refer to the internet in our daily lives, we usually mean the Surface Web. This surface layer consists of websites that can be found using a search engine such as Google or DuckDuckGo, and includes widely used, popular websites such as Facebook, YouTube, Google, Amazon and Wikipedia. Because of its ease of access and popular content, this is the layer of the internet most used by the general public. Its content is organised and curated with the help of algorithms. Since access is free, the Surface Web monetises by integrating advertisements into the user experience and by collecting user data that is traded with commercial organisations. But the Surface Web is only the tip of the web-iceberg: the great majority of data on the internet (an estimated 96%) sits on the Deep Web.

 image: the web iceberg by Vilhjálmur Yngvi Hjálmarsson - credit to New Models.

  Deep web  

The Deep Web consists of all the information that is not indexed by search engines and is not easily accessible through standard web browsers. This varies from government records to social media profile information, military data, cloud data, academic data, legal documents and medical records, among other things. Various measures restrict access to these websites, and usernames and passwords are frequently required: your private internet-banking details or your Netflix login are good examples.

  Dark web  

A minor subset, or sub-layer, of the Deep Web is known as the Dark Web; the two terms are often confused with each other. Special software is required to access it, along with a specific decryption key that allows the user to view the content of an encrypted website. For this purpose, the Tor browser, which lets users access the web without leaving any traces of personal information, has gained popularity.

The Dark Web has served as a safe platform for activists and journalists to communicate anonymously and store sensitive information. But apart from these legitimate uses, the added layer of anonymity has also attracted illegal activity, such as buying and selling weapons, drugs, personal information and organs, as well as services like hiring a hitman or anonymously sharing hacking tutorials. One infamous example of such a website is Silk Road.

For more context material on the Silk Road, we recommend this article by COMPLEX.

  Dark forest  

The term Dark Forest has been used to describe platforms that exist parallel to websites on the Surface Web; examples are Patreon, Substack and some Discord servers. The term was popularised by science fiction author Liu Cixin's book "The Dark Forest," in which humanity's open communication with possibly hostile aliens is called into question and caution is advised, much as animals in a forest must be cautious of predators. As opposed to mainstream Surface Web spaces, users of Dark Forest platforms ‘forage’ for content instead of having it curated by the platform’s algorithm. These spaces are minimally and straightforwardly commercial, if commercial at all, a clear difference from Surface Web spaces. Dark Forest spaces are also more encapsulated, keeping out the influence of corporations or governments in order to protect the privacy and/or security of their users. But as the director and writer Lil Internet from New Models puts it, none of these spaces are perfect:

“A Dark Forest space could be home to a community of generative, well intentioned discussion and debate, or it could be a sort of filter-bubble on steroids.”

Because Dark Forest spaces are often closer in scale to real-life human networks, they are less sociologically stressful than the vast, open space of the Surface Web: your contributions are less exposed to scrutiny than on a platform such as Facebook or Twitter.


 image credit: New Models

For more context material on different web-layers, we recommend checking out this video lecture by New Models.

Now that we know that there’s more to the web than meets the eye, and we have a general idea of how things are laid out, let’s take a look at where we came from.

Web 1.0 _ 1991 - 2003

 Cyberspace 

To get a sense of the structure of the web during its Web 1.0 stage, we’ll look at the text  A Declaration of the Independence of Cyberspace  by John Perry Barlow, published online on February 8, 1996. According to the text, cyberspace should be a realm apart from the one we live in: a new world without centralised governments, borders, or laws. Instead, cyberspace should be free to create its very own social contracts, based solely on the Golden Rule:

"We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth. We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”

It is interesting to note how vast the structural transformation of cyberspace has been since the text’s publication. At the time, cyberspace was thought of as a world separate from the one we live in: most people did not use their true identities online, instead creating a separate persona just for cyberspace. This is a significant difference between Web 1.0 and Web 2.0. Barlow himself is best known as a long-time member and lyricist of the psychedelic rock band Grateful Dead. This is significant because there has always been a strong relationship between 1960s American psychedelic counterculture and the early days of virtual space and cyberspace. As author Fred Turner puts it:

“In the mid-1990s, as first the Internet and then the World Wide Web swung into public view, talk of revolution filled the air. Politics, economics, the nature of the self—all seemed to teeter on the edge of transformation. The Internet was about to "flatten organizations, globalize society, decentralize control, and help harmonize people," as MIT's Nicholas Negroponte put it. The stodgy men in gray flannel suits who had so confidently roamed the corridors of industry would shortly disappear, and so too would the chains of command on which their authority depended. In their place, wrote Negroponte and dozens of others, the Internet would bring about the rise of a new "digital generation"—playful, self-sufficient, psychologically whole—and it would see that generation gather, like the Net itself, into collaborative networks of independent peers. States too would melt away, their citizens lured back from archaic party-based politics to the "natural" agora of the digitized marketplace. Even the individual self, so long trapped in the human body, would finally be free to step outside its fleshy confines, explore its authentic interests, and find others with whom it might achieve communion. Ubiquitous networked computing had arrived, and in its shiny array of interlinked devices, pundits, scholars, and investors alike saw the image of an ideal society: decentralized, egalitarian, harmonious, and free.”

 source: From Counterculture to Cyberculture 

What is described here is a societal binary of the counterculture subject versus the subjects upholding society's established institutions, embodied at the time by the flower-power hippie on the one hand and the “stodgy men in gray flannel suits” on the other. Being online was perceived as a new counter-cultural movement, and most people who used the internet were those outside the norm: programmers, tech enthusiasts, hackers and artists, all of whom saw great potential in this new frontier.

 Web 1.0 sites and net-art 

In the Web 1.0 era, most sites were primarily text- or image-based, with no user interaction. Sites often had a single purpose and were organised in a hierarchical framework: a home page containing hyperlinks that you clicked to go deeper into the website. To actually contribute material online, you needed to be relatively technically adept and know how to code, which made the web inaccessible to the majority of people and established a highly one-directional relationship between content provider and user. Because very few Web 1.0 websites were linked to a search engine, the decentralised and static architecture made it challenging to find information if you didn't know exactly what you were looking for. This is akin to wanting to call one of your friends, but knowing neither their phone number nor having a phone book to look it up in.

Still, artists were quick to recognise the opportunities presented by the internet, and at this time a movement known as net.art emerged, which used the aesthetics and medium of websites for artistic purposes. An anthology of the net art movement created by the Rhizome website may be found here; these sites serve as an excellent example of Web 1.0 sites.

For more context material on the Net-Art, we recommend visiting this website.

  An impression of interacting with a Web 1.0 website.  

Another good example to highlight the differences between Web 1.0 and our current internet is the encyclopedia website. On most of these sites you can look up a lot of information, but you cannot contribute to it or change it. A website like Wikipedia, by contrast, works dynamically, with users contributing information and shaping the site through their interactions.

This example brings us to the next phase: Web 2.0.

Web 2.0 _ 2004 - present

 How is 2.0 different? 

Web 2.0, a term coined in 2003-2004, is distinguished by a network of different sites that connect with one another. Graham Cormode and Balachander Krishnamurthy characterise it as follows in their paper Key Differences Between Web 1.0 and Web 2.0:

“Studies of Web1 highlighted a distinctive ‘bow–tie’ structure (Broder, et al., 2000), with three distinct pieces of a massive connected component. Individual sites typically adopted an approximately hierarchical structure, with a front page leading to various subpages, augmented by cross–links and search functions. Web2 sites are often more akin to real–world social networks (Milgram, 1967), which show somewhat different structures, due in part to implicit bi–directionality of links. There are some tattered remains of a bow–tie still visible (Kumar, et al., 2006). Studying a Web2 site in detail can be inherently harder than studying the Web1 ecosystem, since it requires crawling deep inside the particular Web2 site. Some sites enforce a very user–centric view of the site, meaning that each account can only see detailed information about explicit ‘friends’, in comparison to Web1 which is typically stateless.”

Websites from the Web 2.0 generation rely heavily on user interaction. The user can now comment and upload content without having to re-code the website, giving them direct influence over how the website's content is formed. This change has resulted in a significant democratisation of web content creation and distribution. The dynamic format also pushes Web 2.0 sites to adapt to the wide range of material available online: websites no longer serve a single purpose, but incorporate a variety of functions that keep the user's interest and engagement. You can chat with your friends, organise events, watch videos, and get your news all in one location on a platform like Facebook. This is all centralised through your user profile, which frequently requires you to connect your physical identity to your virtual internet identity. You now input your birthday, name, location, and gender to your online profile, which in turn allows you to locate friends or groups to connect with, whether they be people you know in real life or groups and individuals you only know through the internet.

“A key difference in Web2 is that many sites encourage users to spend as much time as possible on their site. There are strong incentives for increasing such stickiness: opportunities for higher advertising revenue. Further, the more the users interact with the site, the more can be learnt about their interests and habits.”


👀

It is important to note that the decisions made by developers during the design process have a significant impact on how people interact with mainstream websites and online platforms.

 Algorithms, monetization and 'The Two Texts' 

The more time you spend on a website, the more information it gathers about you. Most Web 2.0 sites generate revenue through advertising rather than subscription or service fees paid by customers. The services appear to be free, but in exchange the site collects information about users, which it then uses to target advertisements or sells to third-party commercial companies. This is where algorithms become an integral element of the architecture of a website's content, essentially adding a monetary system. When we spend time on a website and interact with it, we leave traces of information that can be used to analyse our habits and serve us content the algorithm believes we want to see, causing us to spend more time on the website. Shoshana Zuboff named this mechanism The Two Texts in her book The Age of Surveillance Capitalism.

“FIRST TEXT: When it comes to the first text, we are its authors and readers. This public- facing text is familiar and celebrated for the universe of information and connection it brings to our fingertips. Google Search codifies the informational content of the world wide web. Facebook's News Feed binds the network. Much of this public-facing text is composed of what we inscribe on its pages: our posts, blogs, videos, photos, conversations, music, stories, observations, "likes," tweets, and all the great massing hubbub of our lives captured and communicated.”

---

“SECOND TEXT: Under the regime of surveillance capitalism, however, the first text does not stand alone; it trails a shadow close behind. The first text, full of promise, actually functions as the supply operation for the second text: the shadow text. Everything that we contribute to the first text, no matter how trivial or fleeting, becomes a target for surplus extraction. That surplus fills the pages of the second text. This one is hidden from our view: "read only" for surveillance capitalists. In this text our experience is dragooned as raw material to be accumulated and analyzed as means to others' market ends. The shadow text is a burgeoning accumulation of behavioral surplus and its analyses, and it says more about us than we can know about ourselves. Worse still, it becomes increasingly difficult, and perhaps impossible, to refrain from contributing to the shadow text.”

 source: The Age of Surveillance Capitalism 

Much of this tracking happens through cookies, which have been the subject of much debate in recent years. A cookie is essentially part of your internet fingerprint: information that is collected and sifted through by a machine-learning algorithm, categorised, and then used to curate and generate targeted content for you. This collecting, retaining, and selling of information has culminated in a centralised network of big tech corporations that now control the majority of internet traffic. Amazon, Apple, Microsoft, Google, and Facebook/Meta are the most notable.
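The basic mechanics of a tracking cookie can be sketched with Python’s standard library. This is an illustrative toy, and the `visitor_id` name and value are invented: the point is simply that a site hands the browser a unique identifier, which the browser returns on every later visit, letting the site tie those visits together.

```python
from http.cookies import SimpleCookie

# Sketch of the mechanics behind tracking cookies: a site sends a
# Set-Cookie header with a unique ID; the browser stores it and sends
# it back on every later request, linking those visits to one profile.
# The header value below is made up for illustration.
header = "Set-Cookie: visitor_id=a1b2c3; Max-Age=31536000"

cookie = SimpleCookie()
cookie.load(header.split(": ", 1)[1])   # parse the header's value part

print(cookie["visitor_id"].value)       # a1b2c3
print(cookie["visitor_id"]["max-age"])  # 31536000  (one year, in seconds)
```

The long Max-Age is what makes the identifier persistent: the same `visitor_id` keeps coming back for a year, and everything you do in that time can be filed under it.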

For more reading on tracking, we recommend this article by xiphcyber.com.


In the journal entry Distributed Centralization: Web 2.0 as a Portal into Users’ Lives, Robert W. Gehl writes:

“Cybernetic loops, anxious movements, random patterns, professional- and amateur-made content, multiple browser tabs, viral propagation: all of these mark the use of Web 2.0 sites. Online, users move fluidly from one Web site to another. A user who enjoys a Lolcat video on YouTube can “like” it, thus producing a post in his Facebook stream. That user might then change tabs back to Facebook to read comments his friends posted about the video. A particularly snarky comment by one friend is so funny, the user tweets it in Twitter via TweetDeck. While in Twitter, he sees a Tweet for a Wikipedia article on Lolcats, so he clicks on the link. After reading up on the history of Lolcats, he notices a reference in the Wikipedia article to a Colbert Report skit about politician’s cats, so he clicks on that and is sent to Hulu. He finds that video so entertaining that he “likes” it, starting the process all over again.”

 Web capitalism and power 

Search engines have become an invaluable tool for sorting through the vast number of websites that may contain the information you want, and it's difficult to imagine navigating the internet without them. When we look up information today, we Google it. The company provides a highly convenient way to filter through the near-infinite number of websites that may contain the information you need, but the concern is that a single company now controls what information appears at the top of the search results.

Google, for example, is by far the most frequented search engine, with a market share of between 83% and 90% from 2015 to 2022. As a tech giant, the corporation wields significant power in determining which hyperlinks appear first in a list of content related to a user's search. This means that, despite the fact that the internet's content has become more democratic, users have lost autonomy over what they get to see.

👀

In September 2022, Alphabet, Google's parent firm, lost an appeal case challenging a European Union ruling that they were breaching competition regulations. (link)

For more reading on Cambridge Analytica, we recommend this article.


Cambridge Analytica was a British political consultancy and data firm founded by right-wing donor Robert Mercer. According to leaked documents, the firm, which included former Trump aide Stephen K. Bannon on its board of directors, utilised data illegally obtained from Facebook to develop voter profiles. This is considered to have had a significant impact on the outcome of the 2016 United States presidential election. According to the Times, Cambridge acquired data from over 50 million Facebook users; at the bottom of a company announcement on new privacy features, Facebook's chief technical officer, Mike Schroepfer, later offered a new estimate of the number of people affected: up to 87 million, the majority of them in the United States. The Times also revealed information about the Facebook app used to harvest the data. Many people assumed it was a simple Facebook quiz; rather, it was tied to a lengthy psychology questionnaire hosted by Qualtrics, an internet survey management organisation. For those filling out the questionnaire, the first step was to grant Qualtrics access to their Facebook profiles. When they did, an application harvested their personal data as well as the data of their friends. This data was used to display targeted, customised ads about Donald Trump to US voters on various digital platforms, segmented into categories depending on whether the person was a Trump supporter or a potential ‘swing voter’. These ads either showed positive news and ads about Trump or, conversely, negative images and news about his opponent Hillary Clinton.

The Cambridge Analytica scandal directly called the Web 2.0 architecture into question. Former Cambridge Analytica employee Christopher Wylie leaked documents revealing how the firm acquired the data of millions of Facebook users without their consent and used it to inform a strategy for targeted political advertising. They exploited the data to identify people who were more prone to impulsive anger or conspiratorial thinking than the average citizen. Illegally obtained user information was also used to target specific users during the Brexit campaign taking place in the same timeframe. Furthermore, because the Web 2.0 monetary system relies on harvesting user data, a website wants to keep the user interacting with it for as long as possible, and so the content the algorithm chooses tends to be content that reaffirms rather than challenges the person's own opinions and experiences. This has frequently resulted in what are called echo chambers: environments where a person only encounters information or opinions that reflect and reinforce their own. These echo chambers in turn contribute to the polarisation we have seen in popular politics in recent years.

👀

See Wall Street Journal’s Blue Feed, Red Feed for a side by side of two different political “bubbles”. (link)

 Shadow banning 

In essence, the Web 2.0 internet has developed into a rhizome of never-ending content, a constant noise. The algorithm selects which content is heard or seen through the noise, a process comparable to curation. The same algorithm can also decide which content is not shown to users, specifying which information should be kept from rising to the top. Some claim that certain content is, deliberately or not, pushed down by the algorithm, meaning it can remain concealed even when a person searches for it. This process is referred to as shadow banning.

For more reading on shadow-banning, we recommend this article by AQNB.


Currently, mainstream algorithms promote rapid and easily digestible information, making it difficult for atypical or experimental content to gain popularity. Artists in particular have observed this. Consider the audio streaming platform Spotify: it is quite simple to get your music on the platform via a service like CD Baby or DistroKid so that almost the entire world can access it, but being heard through the endless stream of new music and historical catalogues is all in the hands of the Spotify algorithm, and musicians have almost no say in how this algorithm behaves. As users, we have virtually zero control over the sites to which we publish our content. Instead, we work for the platforms, which profit from both the content we produce and the content we consume.

 Dark forest, revisited 

As a result of the realities of the internet's current Web 2.0 phase, many individuals are reconsidering their internet habits. The current platforms provide an often unhealthy, and at times even dangerous, milieu for users. In response, alternative platforms and initiatives aimed at properly serving the interests of their users, artists and creators are slowly emerging: internet communities that help people retain their autonomy. Large groups of users are now looking for new platforms to use both for everyday work and as a source of content creation, and some have argued that platforms founded on these principles could be a valuable starting point for cultural innovators online. These spaces are often dubbed Web 2.5. But what about Web 3.0?

 image: overview of web phases by Vilhjálmur Yngvi Hjálmarsson 

Web 3.0 _ ???? - ????

 Defining a speculation 

To begin speculating on Web 3.0, let's go over the developments we have witnessed so far:


  1. In the pre-Web 1.0 phase, decentralised networks were developed in order to transfer information more securely. As a result, a cyber subculture of primarily tech developers, artist-hackers and cultural theorists arose, all of whom were surfing this newly established World Wide Web and contributing to the creation of a new world that had yet to be defined.

  2. In the Web 2.0 phase, companies began developing methods to make internet surfing simpler. Firms such as Google provide services that they monetise through advertising, as well as by collecting and selling information on their users. This resulted in a network of a few distinct organisations and websites that now have centralised control over the great majority of the web and can regulate, through algorithms, which information gets more attention. This paradigm has received increasing criticism in recent years, and more individuals are calling for the web to return to its original decentralised state.

For more reading on the changing web, we recommend this article by WIRED.


Web 3.0 is what is imagined as this new state of near-future web infrastructures. Shaveta Wadhwa defines Web 3.0 in her paper What Is Web 3.0 and Why Does It Matter? as follows:

“The idea behind Web 3.0 is to create an internet that accurately interprets your input, understands what you convey, and allows complete control over what type of content you want to consume.”

“Web 3.0 is the next generation of the Internet. It is secure, decentralized, and free from the control of the Internet and social media organizations. This version of the Internet is based on blockchain technology and operates on token-based economics.”

 source: What is Web 3.0 and why does it matter?

Among the most prominent predictions on what will revolutionise our internet habits are the development of blockchain technology, the development of virtual and augmented reality technology, and the consequent breakdown and blending of the online, virtual world with our own physical world leading to a Spatial Web.

“The Spatial Web weaves together all of the digital and physical strands of our future world into the fabric of a new universe where next-gen computing technologies generate a unified reality; where our digital and physical lives become one. This is a new kind of network, not merely one of interconnected computers like the original Internet or a network of interlinked pages, text, and media like the World Wide Web but rather a “living network” made up of the interconnections between people, places, and things, their virtual counterparts, and the interactions, transactions and transportation between them.”

”It enables the current information on the web to be placed spatially and contextually on objects and at locations, where we can interact with information in the most natural and intuitive ways, by merely looking, speaking, gesturing, or even thinking. But it also enables the digital to be more physical as sensors and robotics become embedded into our environments and onto the objects around us. It makes our world smarter as it adds intelligence and context to any place, any object, and every person that we encounter, and it allows our relationships with each other and this new network to be more trustworthy, more secure, and faster by decentralizing and distributing the computing and storage of the data.”

 source: Gabriel Rene - An Introduction to the Spatial Web


While the Spatial Web is an important part of the Web 3.0 debate (some even treat it as the full encapsulation of the term), this text will mostly focus on blockchain. The principle of blockchain is at the foundation of the infrastructures that would allow the Spatial Web to be implemented, and blockchain is being implemented today, while more extensive realisation of the Spatial Web in our physical world will take more time and is left for a later future.


image credit: Gabriel Rene - An Introduction to the Spatial Web

image: Bladerunner 2049

 Blockchain 

An anonymous person (or group) using the moniker Satoshi Nakamoto first proposed the concept of blockchain technology in the Bitcoin white paper in 2008. It laid the groundwork for the Bitcoin electronic cash system and, as a result, the blockchain. The idea was to develop a currency system with decentralised control distributed across the internet, rather than one controlled by a centralised institution such as a bank or government. It has been speculated that this blockchain technology might very well help to meet the need for a safer and more decentralised web. As an example: imagine safely storing your personal information on a blockchain, thereby eliminating the need for a company or third party to control it and allowing you to regain control, or even ownership, over it.

šŸ‘€

Blockchain is like a digital ledger that keeps track of all transactions in a safe and secure way. Each transaction is added to the ledger as a block, and all the blocks are linked together to form a chain. This chain cannot be easily changed or altered, making the information stored in it very trustworthy. Blockchain technology is used for things like virtual money, smart contracts, and tracking goods as they move through a supply chain. And because the ledger is spread out across many computers, there is no need for any one person or company to control it.
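The "chain of linked blocks" idea above can be sketched in a few lines of code. This is a toy illustration, not any real blockchain implementation: each block stores a hash of its own contents plus the hash of the block before it, so altering any past entry breaks the chain. All names here (make_block, is_valid) are hypothetical.

```python
import hashlib
import json
import time

def _digest(block):
    # Hash the block's contents in a canonical (sorted-key) serialisation.
    payload = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    """Bundle data with a timestamp and the previous block's hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = _digest(block)
    return block

def is_valid(chain):
    """A chain is valid if every block still hashes to its stored hash
    and points at the hash of the block before it."""
    for i, block in enumerate(chain):
        if block["hash"] != _digest(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

assert is_valid(chain)
chain[1]["data"] = "Alice pays Bob 500"  # tamper with history
assert not is_valid(chain)               # the altered block no longer matches its hash
```

In a real blockchain the same ledger is replicated across many machines that must agree on which chain is the true one, which is what removes the need for a central authority; this sketch only shows why tampering is detectable.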

Developments such as these are gaining serious mainstream traction. For example: in 2021, the European Union announced a research programme dedicated to these themes as part of the Horizon Europe research and innovation action. They write:

“The Internet architecture has developed as a mix of centralised, networked and device-based technologies with design choices largely coming from the past. In particular, the questions of security and energy efficiency were relatively secondary in the initial architecture design of the Internet. At the same time, ever-larger fractions of the internet as we know it today are operated by a small number of platforms controlling end-users’ data, online transactions and infrastructure, effectively leading to a concentration and centralisation of the Internet. Proposals should focus on advancing the state-of-the-art in one of the two research areas below:

1. To review and upgrade the open Internet architecture (hardware, software, protocols) to increase the performance of the network, adapt it to new application requirements, improve quality of service, make it more resilient to security threats, more energy efficient and respectful of the environment (e.g. reparability, recyclability), and increasingly supportive of open and decentralized technologies and services.

2. Address the current limitations of decentralized technologies, such as Blockchain and DLT, including those related to scalability, interoperability, energy efficiency, privacy or security, in order to make them dependable building blocks of the future Internet…”

 Ownership and value in the digital realm 

The preface of Massimo Ragnedda and Giuseppe Destefanis' book Blockchain and Web 3.0: Social, Economic, and Technological Challenges reads:

“As the Internet can be seen as a means for sharing information, so blockchain technologies can be seen as a way to introduce the next level: blockchain allows the possibility of sharing value.”

This points to one of the key problems of today's internet: how do we, in the digital realm, establish ownership and create value? Since digital products are easily copied, many (primarily creative) industries, such as music and film, have seen the economic value of their work significantly reduced. Consider how simple it is to replicate a file on your computer. You could, for example, purchase an album from Bandcamp.com, download it, duplicate it, and share it with your friends at no cost, directly affecting the music's value. The blockchain creates a setting in which digital information cannot be easily replicated, so its value remains constant. This translates into something resembling real ownership of digital content. As each item on the blockchain is timestamped and cannot be modified afterwards, it is predicted that this could aid both in conserving digital content and in registering and proving copyright of a digital work.
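The "timestamped and cannot be modified afterwards" idea can be made concrete with a small sketch. This is a hypothetical illustration, with a plain Python list standing in for an actual blockchain: what gets registered is not the work itself but a fingerprint (hash) of its exact bytes, plus an author and a timestamp, which is enough to later prove that this exact file existed at that moment. All names (register_work, verify_work) are invented for the example.

```python
import hashlib
from datetime import datetime, timezone

def register_work(ledger, file_bytes, author):
    """Record a fingerprint of a digital work, not the work itself."""
    fingerprint = hashlib.sha256(file_bytes).hexdigest()
    ledger.append({
        "author": author,
        "fingerprint": fingerprint,
        "registered": datetime.now(timezone.utc).isoformat(),
    })
    return fingerprint

def verify_work(ledger, file_bytes):
    """Return all registrations matching these exact bytes."""
    fingerprint = hashlib.sha256(file_bytes).hexdigest()
    return [entry for entry in ledger if entry["fingerprint"] == fingerprint]

ledger = []
album = b"...the audio bytes of an album..."
register_work(ledger, album, "some artist")

assert verify_work(ledger, album)             # the original is provably registered
assert not verify_work(ledger, album + b"!")  # an altered copy does not match
```

On a real blockchain the ledger would be the append-only, distributed chain itself, so the registration could not be backdated or erased; copies of the file can still circulate, but only one registration proves priority.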
 

But we should not jump to conclusions about blockchain being the rock-solid future of the internet. The new technology has been met with a slew of criticism, the most prominent of which concerns its harmful impact on the environment. According to the article Bitcoin's Impacts on Climate and the Environment, the blockchain industry was responsible for an excess of 27.4 million tonnes of carbon dioxide (CO2) between mid-2021 and 2022, three times the amount emitted by the largest coal plant in the United States in 2021. Bitcoin is thought to consume 707 kWh per transaction. Additionally, the computers involved consume extra energy because they generate heat and must be kept cool. While it is impossible to tell exactly how much electricity Bitcoin requires, since different processors and cooling systems have varying levels of energy efficiency, a University of Cambridge investigation indicated that Bitcoin mining consumes 121.36 terawatt-hours per year. This is more than the entire country of Argentina consumes per year, and more than the yearly consumption of Google, Apple, Facebook and Microsoft combined. Since blockchain technology is still in its early stages, we should be aware of both its pitfalls and its benefits.

Perhaps more important than the concept of blockchain itself is the growing awareness that the infrastructures dominating the internet today have serious flaws that must be addressed. According to a Gallup poll conducted in 2021, only 34% of Americans believed that the big tech companies behind these platforms had a positive impact on society. The emergence of Web 2.0 brought with it a lot of promise in terms of user interaction with websites, user creation and distribution of content, user access to massive amounts of information, and convenient and simplified ways of navigating the web. But these new platforms have also developed in increasingly problematic directions.

For more reading on big-tech, we recommend this article by GALLUP.

Screenshot 2023-02-15 at 15.45.06.png

Ben Tarnoff, a tech worker and co-founder of Logic Magazine, wrote an article titled The Internet Is Broken. How Do We Fix It? Tarnoff suggests that, while the issues are diverse and complex, they all trace back to the fact that the internet is owned by private companies and run for profit. He writes:

“MAREA (a 6,600 km (4,000 mile) long transatlantic communications cable connecting the United States with Spain. Owned and funded by Microsoft and Meta Platforms) and other cables are, to borrow a metaphor from the Uruguayan journalist Eduardo Galeano, like veins in a mine. Through them, wealth is extracted and communities are dominated.”

source: The internet is broken. How do we fix it?

To build a better internet, it seems we must change the way it is owned and organised, so that the platforms work for us rather than us working for the platforms. Some steps have been taken to create social networks based on collective control rather than centralised business. Tarnoff cites programmers such as Darius Kazemi, author of a step-by-step guide to launching a small-scale social media site, and platforms such as Mastodon, an open-source software project that allows people to run their own social media servers and connect them into a federation. Users have more control over who sees their personal information on platforms like this, as well as a more democratic say in how the servers that host the platform itself are run.

 In conclusion 

The internet as we know it today will change rapidly, as many of the concepts outlined above are already manifesting in concrete ways you can use today. Although corporations now control and curate the web, alternatives based on decentralised principles are becoming more available. The need for these alternative spaces and platforms will only increase as the internet and its user base expand. This effort is already well underway and will benefit from the backing of a contemporary cyber-counterculture, a role well suited to be taken on (in part) by artists and cultural workers today.

 Let us know your thoughts! 
