A look into the past

A very close friend of mine works at the State Library of Berlin on a project called Handschriftenportal. It’s a digital repository of European book manuscripts in German collections, providing a platform for researchers interested in the knowledge preserved in medieval and early modern handwritten books. Since she started there, I’ve received a lot of pictures of the weirdest (cute!) little illustrations she has found over time—they are marginalia. A year ago, I would have never guessed that there would be a connection between my degree and the screenshots of marginalia from the Middle Ages on my phone. We have not talked about this weird coincidence yet, but I asked her if her more research-focused colleagues had some articles on marginalia they could recommend. They recommended a recent article by Zoe Screti, Finding the Marginal in Marginalia, which advocates for including descriptions of marginalia in catalog entries to empower relevant yet still marginalized voices in the margins.17 Screti sums up marginalia's history and recent discourse before advocating for a shift in current cataloging practice.18 Screti defines marginalia as follows:

Today, as then, marginalia is used to describe any form of markings (both visual and textual, printed and handwritten) made on a text that are usually—but not always—located in the margins of the said text. 19

17 Screti, Zoe. “Finding the Marginal in Marginalia: The Importance of Including Marginalia Descriptions in Catalog Entries.” Collections: A Journal for Museum and Archives Professionals 20, no. 1 (March 2024): 122–41. https://doi.org/10.1177/15501906231220976, 122.

18 I can really recommend this article as further reading. Its main topic is a deeper look into all of the marginalized groups of people who have a voice in the margins but are ignored by current practices of digitizing and cataloging books.

19 Screti, Zoe. “Finding the Marginal in Marginalia: The Importance of Including Marginalia Descriptions in Catalog Entries.” Collections: A Journal for Museum and Archives Professionals 20, no. 1 (March 2024): 122–41. https://doi.org/10.1177/15501906231220976, 124.

I’m wondering if I should have called my thesis code paratextual materials; it just rolls off the tongue.

The word traces its history back to 1640, when it was used for the first time to refer to printed marginal notes that provided additional context to the main text. But the practice of writing in books is much older than the term—before marginalia, it was apostille, deriving from the Latin word postilla for note. Screti notes that marking a book is an active response to what a reader has read. In that way, marginalia leave a trail of personal thoughts and reactions behind, a document of the dialog between the reader, the author, and the text. For researchers, they are an invaluable resource providing glimpses into the readers and their socio-cultural contexts.20 This approach of understanding socio-cultural contexts through annotations carries over into the digital with Source Code Criticism, which employs comparable methods. Bajohr and Krajewski trace the function of the modern comment back through time to commentary in legal practice since late antiquity and to theological exegesis—the explanation and interpretation of a text, especially the Bible.21 At least for exegesis, this happened—as far as I understand it—in the margins and was also transferred by scribes into later copies of the books. Coming from the digital side, it’s also interesting to read about the shift to the digital in the field of marginalia research. Screti discusses the digital representation of marginalia, mentioning the slow paradigm shift from using the term marginalia to paratextual material, which encompasses the full range of marginal notation. Screti also refers to Tatiana Nikolaeva Nikolova-Houston’s classification of marginalia within hypertext theory and her comparison of the function of marginalia with hyperlinks on websites.22 Even though Nikolova-Houston’s view connects more to hyperlinks than to comments, it’s interesting that there is this overlap between marginalia and HTML. Helen J. Burgess, a media studies researcher, connects marginalia and printmaking to HTML and comments.23 Burgess connects the pecia system, which was used in the process of hand copying medieval books24—and the marks the process leaves behind as marginalia—to HTML marking articles of content and to comments.

20 Screti, Zoe. “Finding the Marginal in Marginalia: The Importance of Including Marginalia Descriptions in Catalog Entries.” Collections: A Journal for Museum and Archives Professionals 20, no. 1 (March 2024): 122–41. https://doi.org/10.1177/15501906231220976, 122–4.

21 Bajohr, Hannes, and Markus Krajewski, eds. Quellcodekritik: zur Philologie von Algorithmen. Erste Auflage. August Akademie. Berlin: August Verlag, 2024, 81-3.

22 Screti, Zoe. “Finding the Marginal in Marginalia: The Importance of Including Marginalia Descriptions in Catalog Entries.” Collections: A Journal for Museum and Archives Professionals 20, no. 1 (March 2024): 122–41. https://doi.org/10.1177/15501906231220976, 124.

23 Dilger, Bradley J., and Jeff Rice, eds. From A to <A>: Keywords of Markup. Minneapolis: University of Minnesota Press, 2010.

24 The pecia system was a system for efficiently reproducing handwritten books in the Middle Ages. Copy texts, so-called exemplars, were loaned out in parts—piece by piece, or in Latin pecia—to be copied and then returned. This preserved the quality of the copied books by limiting errors. With this system, multiple workshops worked on one copy of a book, which meant there was a need for communication between all the different people involved. Pecia marks were the solution to this problem, as they marked where the different pieces of hand-copied text (pecia) would be placed in the new copy of the book. In: Dilger, Bradley J., and Jeff Rice, eds. From A to <A>: Keywords of Markup. Minneapolis: University of Minnesota Press, 2010, 170–1.

When I look around both physically and digitally, I’m surrounded by comments and notes. Currently, I’m writing at a desk in my living room—it’s close to 8pm. I’m nearly done for today, and I am fried. On my left is a jumble of notes—random thoughts on paper from when I’m not at my computer; a Kaufland receipt with notes from one of my flatmates, who took the time to listen to me while I had my first (hopefully last) meltdown about this thesis and later helped me with structuring; a few mind maps trying to conjure a semblance of a common thread through my thoughts into this text; and lastly, two kimchi recipes. In my writing environment, a few sources are currently open as tabs, and my random thoughts about the process of writing this text, and about how the website and the print version will look, go either into this text itself or into my documents called thoughts and design & writing.

For her, comments share formal similarities with pecia marks, as both mark articles and spaces for content. Like Bajohr and Krajewski, Burgess draws a parallel between comments and exegesis. For her, these aspects of explanation and interpretation are the key difference from pecia marks. Comments as literary devices seek to explain what will happen when code is performed—they are a companion to code and a tool for communication with other people.25 In this sense, she aligns with the values of Literate Programming. All these perspectives reaffirm to me a deep connection between comments and marginalia. On the one hand, both have similar functions in the field of annotations—especially if one considers commentaries as paratextual material, where I understand the textual as code. On the other hand, there is the lineage that runs from the comment as a form of literary commentary (not the code comment), as seen in exegesis and law, through the marginalia in which most of this commentary takes place,26 to the modern comment in code, which can trace its history back to the literary comment. When I look at the relation between this commentary in the margins and the text it is commenting on, it gives me such Literate Programming vibes.

Going back to Bajohr and Krajewski and their short summary of the history of the comment, they also shine a light on comments in editorial philology27 to review the comment as a tool for reflection through the lens of Source Code Criticism. They emphasize the cognitive gap between text and comment and the automatic change of perspective one gets from crossing this gap. This continuous change of perspective creates a distance to the subject matter and serves as a means of self-reflection and critical examination of the writing process.28 I think this mechanism also applies outside of Source Code Criticism. I often feel this way when writing comments to myself while coding or, in this case, while writing. When coding, I tend to write down what I want a function to do as a comment beforehand and then iterate between code and text, adding or editing comments and notes as I go. Sometimes, I also write paper notes because I feel that the motion of handwriting helps me more than typing the same thought—or it won’t come out unless I’m writing by hand.

25 Dilger, Bradley J., and Jeff Rice, eds. From A to <A>: Keywords of Markup. Minneapolis: University of Minnesota Press, 2010, 184–5.

26 Department of Medieval Art and the Cloisters. “The Art of the Book in the Middle Ages,” n.d. http://www.metmuseum.org/toah/hd/book/hd_book.htm.

27 Editorial philology tries to find out more about the origin and structure of texts. Comments clarify ambiguous or unclear passages or information in a text. They highlight deviations from or deletions of text between copies and the original to make all the edits transparent. This feels close to commenting out code and writing a note, but let’s be honest, it’s clearly git and version control, right?!

28 Bajohr, Hannes, and Markus Krajewski, eds. Quellcodekritik: zur Philologie von Algorithmen. Erste Auflage. August Akademie. Berlin: August Verlag, 2024, 81-3.

About your modern incarnation,
as the comment in code

Mixing and matching whatever medium is at hand to record a thought or leave a note feels close to how I perceive the transition from handwritten comments for code to digital comments in code. I was vaguely aware of punch cards, but with how coding works now, the early era of computing feels so far away. The first general-purpose electronic computer was the ENIAC. Its purpose, and the reason for its development, was ballistic calculation for the US military: it was used to compute the trajectories of artillery shells and missiles so that they would not miss their targets. Like many technical inventions that we rely on today, the computer, too, began as a project of the military-industrial complex.

I’ve been thinking a lot about how to structure this part—or rather, how visible I want to make the perceived role of women in this part of history and the origins of modern computing as a military project. I wrote a few different versions, mentioning them in a comment and later a footnote, but I think it’s good to already have this information in the main text—even though I will take a deeper dive into this topic in the following chapter.

The ENIAC was operated with knobs, plugboards, and external memory on punch cards. The program had to be written on a piece of paper, translated into binary for the machine to understand, and then input into the different parts of the computer by turning the right knobs, plugging cables into the plugboards, and feeding in punch cards.29 These crucial and complicated steps for operating the computer were performed by women—the so-called ENIAC Girls, who were forgotten by history for a long time. They were mostly regarded as nothing more than operators—clerks akin to switchboard operators—by their male colleagues.30

In this era of handwritten or typewritten code, I wonder how many notes and comments were written down on paper—part code, part notes and scribbles. I imagine little reminders and drawings for remembering the specific quirks of the machine, notes to self, and notes to others. Scribbles, crossed-out errors, or rewritten letters and numbers because they were not readable enough. Notes to help figure out how a specific program should work and notes for the conversion to binary. Notes and markings on specific punch cards and maybe even comments on the final versions of binary instructions.31 All of this happened in the margins of the instructions fed to the ENIAC to perform code. I think that was a time when comments could be considered marginalia in the classical sense.32

29 University of Pennsylvania. “Celebrating Penn Engineering History: ENIAC.” Accessed April 7, 2025. https://www.seas.upenn.edu/about/history-heritage/eniac/.

30 Ensmenger, Nathan. The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise. History of Computing. Cambridge (Mass.): MIT Press, 2010, 14-5.

31 These two posts about writing code and comments made me daydream about stacks of notes, loose papers, and quick scribbles. One is about comments in early programming, the other about rewriting an old FORTRAN program from punch cards in Visual Basic.

32 Maybe there will at some point be a time in my life when I visit archives to look at those code marginalia.

It was so hard to grasp how early computing with knobs, switchboards, and punch cards worked. In the beginning, I had no idea how code would be transferred to something like a punch card or tape because all the literature I read assumed knowledge of this process. So, I turned to Science YouTube for help and was rewarded. I’m such a sucker for good science communication. After binging a lot of different YouTube videos and taking notes, I’m glad I finally have a rough overview of the history of computing, even if it’s just through the lens of comments. At the beginning of my research, I focused more on the social aspect of coding. Still, I realized that I really had to get to know the basics of the technical side of these early computing technologies to understand how the transition from paper to digital came about and how comments changed in this period.

Coding was very tedious and challenging in this era of computing. Over the following decades, new approaches and ways of programming were found step by step, which also transformed the form of the comment. Through the change from binary machine language to the first level of abstraction, assembly language, in the 1950s, and the subsequent change from low-level to high-level programming languages such as FORTRAN33 and COBOL34 in the 1960s, the comment stayed a companion.

I present to you a brief summary of the history of computers from the perspective of the comment. Throughout researching this transition period35 and getting to know all these different technologies, I’ve noticed that even though the hardware changes over time, it is the programming languages and their features that carry over and are supported on the devices. So, the focus through all these iterations of computers and storage media lies on the language of code and the human element, not necessarily the device. Nevertheless, the devices are still important for understanding this transformation process, as they determine the medium on which writing takes place—initially analog and later digital. In the 1950s—the era of ENIAC and its successor devices—computers used different media for storing and loading programs or code: knobs and switches, plugboards with cables, paper tape, as well as punch cards. The basic workflow from coder to computer was the following:

  1. Write down your basic idea of what the code should do by hand.

  2. Next, you write the actual code on a coding form, a piece of paper comparable to a code editor today. The code would have been in assembly or, later, a high-level programming language. If you write code for the ENIAC, you have to translate it into machine language by hand.

  3. Next, you transfer your code to a memory medium the computer accepts. This could have been a series of knobs, which you switch to represent the binary code. Another way would have been a plugboard, where you connect outputs to each other with cables. The third way was punch cards or paper tape. With them, you could type your code into a dedicated machine that would punch the paper holes accordingly. In the case of the punch card, one card represents one line of code.

  4. Run the program.

(Images: ENIAC Program Development Sheet; Assembler Code Form; EDSAC programme sheet and corresponding paper tape instruction strip.)

33 Chandra, Vikram. Geek Sublime: The Beauty of Code, the Code of Beauty. Minneapolis, Minnesota: Graywolf Press, 2014, 40–1.

34 Chandra, Vikram. Geek Sublime: The Beauty of Code, the Code of Beauty. Minneapolis, Minnesota: Graywolf Press, 2014, 92.

35 These two videos from PBS Crash Course summarize the early history of computers as well as programming languages. CrashCourse, “Early Programming: Crash Course Computer Science #10,” May 3, 2017, accessed April 7, 2025, https://www.youtube.com/watch?v=nwDq4adJwzM; and CrashCourse, “The First Programming Languages: Crash Course Computer Science #11,” May 10, 2017, accessed April 7, 2025, https://www.youtube.com/watch?v=RU1u-js7db8.

This process did not change much between the different programming languages. While writing code, there was always space for comments, notes, or scribbles on the paper—it did not depend on the language, and all code was prewritten on coding forms. The early high-level languages like FORTRAN36 and COBOL37 offered the option of writing comments directly in the code by marking the line with the right symbol. These could also be encoded onto the punch cards, with the additional option of commenting out any line of code afterward by punching out a certain field on the card. The whole punch card would then be treated as a comment—irrelevant to the machine but still an integral part of the order of the punch card stack. In a way, the comment could be transferred from the margins of the coding form to a new state. It would be embedded in the code's main text but still retain its status as an outsider, as it would not be seen as relevant by part of the code's readership: the machine.38

36 GNU. “28.14.3 Fortran Comments.” Accessed April 7, 2025. https://www.gnu.org/software/emacs/manual/html_node/emacs/Fortran-Comments.html.

37 mainframestechhelp. “Cobol Tutorial—Comment Line.” Accessed April 7, 2025. https://www.mainframestechhelp.com/tutorials/cobol/comment-line.htm.

38 With this in mind, I’m wondering if everything that was printed on a punch card—be it the numbering, logos, or, in general, anything that was not the punched hole—could be considered something akin to a comment or marginalia from the perspective of the machine.

When I read Miriam Humm’s thesis on Hypertext while researching my own, I found out about the Advent of Computing podcast, which she linked as a playlist. I have listened to many different episodes since then and can really recommend it if you are interested in these topics after reading about them here.

Here is a playlist of all the episodes that are relevant to the topics this text touches on.

ENIAC Part I & Part II
Assembly
FORTRAN
COBOL
C Part I & Part II
Story of Mel
Esoteric Languages

What was also really helpful in understanding the history of early computing was the video series about computer science by PBS Crash Course on YouTube. A lot of it is about programming and many other topics, but a few videos touch on the history and basics of many aspects of the technology we use today.

(Images: A punch card is one line of code. Software: a stack of punch cards.)

With most of the following high-level programming languages evolving from FORTRAN and COBOL,39 the ability to comment within the code itself was also part of everything that followed.40 The next era of computers would move on from punch cards to computers with actual operating systems.
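
To make that continuity concrete, here is what commenting looks like in JavaScript, one of the languages I write in today (a small made-up snippet, not taken from any particular program):

    // A line comment: everything after the two slashes is ignored by the machine.
    const greeting = "hello";

    /* A block comment can span several lines,
       much like a marginal note running down the side of a page. */
    console.log(greeting);

    // "Commenting out" works like the punched comment field on a card: the line
    // stays part of the text but is no longer read as an instruction.
    // console.log("this line is skipped");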

The hardware changed, but the possibility of commenting in the code remained. You were able to comment directly in the terminal on the computer itself. With these new computers and their operating systems, multiple people could share one computer through many terminals—single access points with a screen and a keyboard—and use them to run their programs.41 One example of these early operating systems is Multics.42 It could run programs written in many programming languages directly from a terminal.43 Later, with the development of operating systems like Unix, the computer retained most of these functions, but the computer and the terminal became one and the same. So, a computer with one user—a personal computer—brings us to the contemporary comment.44

39 Éric Lévénez. “Computer Languages History.” Accessed April 7, 2025. https://www.levenez.com/lang/.

40 Davids Kacs. “Comments in Different Programming Languages.” Accessed April 7, 2025. https://gist.github.com/dk949/88b2652284234f723decaeb84db2576c.

41 CrashCourse, “Operating Systems: Crash Course Computer Science #18,” June 28, 2017, accessed April 7, 2025, https://www.youtube.com/watch?v=RU1u-js7db8.

42 Multics. “Multics History.” Accessed April 7, 2025. https://www.multicians.org/general.html.

43 Multics. “Multics Features.” Accessed April 7, 2025. https://www.multicians.org/features.html.

44 CrashCourse, “Operating Systems: Crash Course Computer Science #18,” June 28, 2017, accessed April 7, 2025, https://www.youtube.com/watch?v=RU1u-js7db8.

A search for bad vibes

However, this digression into the history of the comment and, hence, also into the history of programming has largely ignored a crucial aspect: the social dimension and the social practices that have developed over the decades—a history of what I perceive as bad vibes. It is also a history linked to why the comment is so important to me. One of my main sources for understanding the social aspects of the last 80 years of computing history is Nathan Ensmenger’s book The Computer Boys Take Over, in which he examines the rise of the computer expert in American society over the last century. However, the questionable history of technology as a male narrative goes back much further.45

45 Later, after reading Judy Wajcman’s TechnoCapitalism Meets TechnoFeminism, I learned about Ruth Oldenziel’s Making Technology Masculine, which presents the historical process through which technology was framed as something inherently masculine during the transition from the 19th to the 20th century.

Going back to the setting of the 1940s, when computing was getting started, women were mostly relegated to clerical positions like human data processing or execution, with men in charge of the research and development of this new technology. It wasn’t any different with the ENIAC. The women who worked on this project set up the machines and were expected simply to implement the instructions of the male planners. They were at the bottom of the hierarchy of intellectual and professional status. Yet the process of translating the instructions into machine code and inputting them into the machine turned out not to be a simple operation. It required innovative thinking and new approaches developed by the female coders. What was thought to be a straightforward process of coding an algorithm turned out to be a multi-layered process of analysis, planning, testing, and troubleshooting. They had to fight for the acknowledgment of their insights and improvements in computing. The so-called ENIAC Girls remained unknown and unrecognized for their contributions for a long time.46 These and, later, many other women in similar positions were a crucial part of this chapter of computing history, with many becoming programmers. This left the programming field in a somewhat uncertain position regarding the social status connected to the work. Both men coming from high-status research jobs and women coming from low-status clerical positions became programmers. At the same time, the field began the process of professionalization to become a high-status discipline. Through the 1950s and 60s, the programming profession was still relatively open to women compared to other occupations of the period, but advancing professionalization changed this.47 This professionalization is a central aspect of the masculinization of programming. Ensmenger refers to Margaret Rossiter and others who have suggested that the process of professionalization is deeply rooted in the exclusion of women.

46 Ensmenger, Nathan. The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise. History of Computing. Cambridge (Mass.): MIT Press, 2010, 14-5.

47 Ensmenger, Nathan. The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise. History of Computing. Cambridge (Mass.): MIT Press, 2010, 243.

To improve the status of the discipline, the often feminized aspects of low-status work had to be excluded from the profession. In addition, a prerequisite level of higher education made access to the profession more difficult. This made it disproportionately hard for women to enter the profession—especially when also encumbered by the role of mother and/or wife.48 Professionalism also implies a level of authority and expertise—qualities that were not associated with femininity.49 Ensmenger attributes the pressure for increasing professionalization to the following key reasons. The software industry of the 1960s was in great need of programmers as the computerization of business rapidly expanded. With a shortage of programmers, they started to become highly sought-after workers with an increased status in their jobs.50 This, however, collided with the managerial class of workers, who felt threatened in their positions. Seen as unruly and unpredictable by managers and office workers, programmers had to struggle to find their place in the company hierarchy.51 The way I read it, professionalizing programming was a way of standardizing this new group of workers: making them more predictable and cost-efficient and integrating them fully into the established system without endangering the existing corporate structures.

48 Ensmenger, Nathan. The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise. History of Computing. Cambridge (Mass.): MIT Press, 2010, 239.

49 Ensmenger, Nathan. The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise. History of Computing. Cambridge (Mass.): MIT Press, 2010, 239.

50 Ensmenger, Nathan. The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise. History of Computing. Cambridge (Mass.): MIT Press, 2010, 236-40.

51 Ensmenger, Nathan. The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise. History of Computing. Cambridge (Mass.): MIT Press, 2010, 22.

Just rereading and rewriting these passages from Chandra’s and Marino’s texts makes me so mad. And it’s not only coding; it’s everywhere. Okay, I think I’m going for a short walk to calm down a bit. It’s so infuriating.

One way companies tried to select people for the job of programmer was through testing programs for new employees. These often included pseudoscientific aptitude tests and personality profiles. They focused on mathematical skills or a formal education in mathematics, a criterion that was already disproven when these tests first appeared, and favored antisocial men with an interest in mathematics—the detached male programmer.52 The goal of these testing programs was to let companies transition from dependence on a few highly skilled programmers to a larger, less skilled workforce in which each programmer is interchangeable.53 With this systematic preselection of a particular type of man to be a programmer, the stereotype of the antisocial, male, mathematics-inclined programmer came to be in the 1960s and was embraced by men as a self-description of the profession.54 With the increasing masculinization of the field, behavior patterns took root whose impact I can still feel today. The idea of real programmers who work on real computers with genuine programming languages. They understand the computer and think in machine code. They don’t need easy programming languages because they are just that good with code. Probably no one else can understand their code because it’s just too clever and exploits the hardware on a fundamental level.55 The rudeness and antisocial behavior are what make them good programmers—no bullshit, just objectively superior code. And in comparison to men, who are born programmers with their logical thinking and problem-solving nature, women just can’t compete because they are out of their element.56 Glory to the male genius!

52 Ensmenger, Nathan. The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise. History of Computing. Cambridge (Mass.): MIT Press, 2010, 77-9.

53 Ensmenger, Nathan. The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise. History of Computing. Cambridge (Mass.): MIT Press, 2010, 81.

54 Ensmenger, Nathan. The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise. History of Computing. Cambridge (Mass.): MIT Press, 2010, 239-4.

55 Chandra, Vikram. Geek Sublime: The Beauty of Code, the Code of Beauty. Minneapolis, Minnesota: Graywolf Press, 2014, 45-6.

56 Chandra, Vikram. Geek Sublime: The Beauty of Code, the Code of Beauty. Minneapolis, Minnesota: Graywolf Press, 2014, 58-9.

Marino sees this mix of machismo, sexism, and chauvinism in coding as a form of encoded chauvinism. He defines encoded chauvinism as follows:

(...) encoded chauvinism, the name I give denigrating expressions of superiority in matters concerning programming, which I see as a foundational element of the toxic climate in programming culture, a climate which often proves hostile—particularly to women and other minority groups—and is a kind of technological imperialism.57

57 Marino, Mark C. Critical Code Studies. Software Studies. Cambridge, Massachusetts: The MIT Press, 2020, 134.

This hierarchical view, based on an arbitrary judgment of what is “real” or “good” or “right” code, can already be seen in the concept of higher- and lower-level programming languages, as it implies an order with the machine at the bottom and the human and their language at the top.58 It’s represented in the age-old discussions of which programming language is the “best” and which languages real programmers use,59 in the formation of certain programmer archetypes,60 and in the superiority complexes of the open-source community61—even though this could, in theory, be a place with good vibes.

All of this builds a narrative about who can code and who should have access to it. Returning to the first comment in this text, where I talked about my struggle to decide on a language to write this text in, the language of code is another substantial factor in the imperialistic—and to a certain degree colonizing—aspects ingrained in coding. There is no real alternative to the English language.

58 Marino, Mark C. Critical Code Studies. Software Studies. Cambridge, Massachusetts: The MIT Press, 2020, 149.

59 Marino, Mark C. Critical Code Studies. Software Studies. Cambridge, Massachusetts: The MIT Press, 2020, 134.

60 Chandra, Vikram. Geek Sublime: The Beauty of Code, the Code of Beauty. Minneapolis, Minnesota: Graywolf Press, 2014, 42-5.

61 Chandra, Vikram. Geek Sublime: The Beauty of Code, the Code of Beauty. Minneapolis, Minnesota: Graywolf Press, 2014, 54.

There is also this lighthearted thread of non-English devs talking about switching from their native language to English for coding.

Until I got into coding, I was still pretty naive about this, if I’m honest. I remember one of my first thoughts about this was: How do people who don’t speak English or use a keyboard with a Latin alphabet actually code? Yeah, you will probably just use your specific character set, and it will work somehow. This idea of universal language and character support was shattered as I watched people in our class switch their keyboard from their native layout, like Hangul, to English. I realized quickly that there is basically only English in the commercialized world of coding. There are, compared to the totality of all programming languages, a few that are non-English. Still, many of them are either not really compatible, proofs of concept, or made for educational purposes.62 Besides these very few non-English programming languages, the only other exceptions are esoteric languages. They are more akin to artistic endeavors probing the boundaries of what a programming language can be.63 The programming languages I and many others use to make websites are only available in English.64 This bias is also prevalent in the input devices we use. Even my German qwertz layout is not really that nice for coding. Many of the characters I have to use are hidden behind different combinations of control, shift, and option, whereas they are way more accessible on an English qwerty layout. Plenty of people I know switched permanently to the English layout because of coding.

62 Gretchen McCulloch. “Coding Is for Everyone—as Long as You Speak English.” Wired (blog), August 4, 2019. https://web.archive.org/web/20190409052734/https://www.wired.com/story/coding-is-for-everyoneas-long-as-you-speak-english/.

63 Mateas, Michael, and Nick Montfort. “A Box, Darkly: Obfuscation, Weird Languages, and Code Aesthetics.” In Proceedings of the 6th Digital Arts and Culture Conference, 144–53. IT University of Copenhagen, 2005, 148.

64 HTML, CSS, JavaScript, and php.

As Marino puts it, the choice to base programming languages on one specific natural language has significant implications for everyone interacting with them. It can have colonizing effects on non-native speakers, and every programmer who is not a native speaker is at a disadvantage in the computational economy, as they don’t have the same connection to the tokens or syntax adopted from the natural language.65 Code and programming are not neutral or unbiased—they are an interwoven mess of the social conditions they emerged from and their integration into the capitalist system and patriarchal structures, characterized by displacement, exploitation, demarcation, and continuous development under these conditions.66 They were and, to a certain degree, still are tools of Western imperialism and US soft power. Nevertheless, they are also tools for expression that can be subverted, rethought, and critically engaged with. Practices like codework, code poetics, electronic literature, or new artistically inclined programming languages broaden our understanding of what code can be and who a programmer is. Similarly, the field of software studies is critically engaging with what code was, is, and can be.

65 Marino, Mark C. Critical Code Studies. Software Studies. Cambridge, Massachusetts: The MIT Press, 2020, 131-2.

66 For example, in Donna Haraway’s analysis of the connections between technology, capitalism, and patriarchy, or Silvia Federici’s analysis of the emergence of capitalism from a feminist-Marxist perspective.

Code on the Web

The end of the 1980s was also when the web, and with it HTML, came to be, but this is another story, one Miriam Humm has already told.

Finding out more about View Source was a fascinating rabbit hole. I spent half of the four hours trying to write the correct query and browsing random forum posts until I finally found this Quora post from Glen Murphy, which had somewhat of an answer. On the basis of the quoted blog post by Berners-Lee testing different browsers in 1992, I think ViolaWWW was the first browser to support this, in 1992. Sadly, I couldn’t find the feature mentioned in the documentation I found for ViolaWWW. Another contender could be Netscape in 1994. This release history for the browser lists version 2.0 alpha 3, April 6th, 1994, with the feature “view document source code.” The book Coders: The Making of a New Tribe and the Remaking of the World by Clive Thompson attributes the view source feature to Netscape, but it does not state whether it was the first browser to have it.

With the web, I see history repeating itself to a certain degree. The technical foundation for the Internet was built in the 1960s as a US military project for distributed communication called ARPANET. First in the US and later in Europe and other countries, similar networks were established between universities for research purposes.67 The World Wide Web, or the Internet as we know it today, got its start in 1989 at CERN68; yet again, it is an overwhelmingly male story.69 From the original proposal, the web didn’t take long to take off.70 Cyberspace promised to be full of new possibilities—potentially for everyone. The utopian cyberfeminist visions of completely redefining yourself and leaving markers like race, gender, and class behind stayed visions.71 Yet again, the web was positioned as part of masculine culture through its closeness to technology and computer science. However, positions such as cyberfeminism have been criticizing and working against these circumstances since the beginning,72 as have art movements such as net-art73 and electronic literature,74 which overlap with cyberfeminism to some extent and also break open what the web, websites, and web code can be. Parallel to this early period, characterized by curiosity and hobbyism, the web began to be monetized and was fully integrated into capitalism by the 2000s and the bursting of the dot-com bubble. In its current form, the Internet largely exists as an entry point for walled gardens run by big tech fighting for our precious attention and data, as well as the technical backbone of the platform economy at large, slowly commoditizing every aspect of our existence.75 Somehow, I still love the Internet, at least when I look at the communities existing in resistance to those forces, building a web that does not follow the path of monetization, like the poetic web, for instance.

But what about the comment? On the web, you can see the source code of a website quite easily compared to other software, like the browser itself. The inspector in modern browsers and the view source function display a website’s HTML code and provide access to the CSS and JavaScript files a website uses. With this, I can look through the web code and all the comments left behind. With so much of the code behind software hidden or inaccessible, I wondered why every modern browser would happily show me everything it could. This can be traced back to the early 1990s and early browser development, when developers realized that it could be a fun feature to let people see a website’s code, since it was sent to the browser anyway to render the website. With view source, every website became an opportunity to experiment with and learn from—web code became accessible and easily sharable.76 In the mid-2000s, the modern integrated web inspector was created. There were different stand-alone tools with similar functions available; building on these, the Firefox plug-in Firebug and the Web Inspector for Safari unified the different aspects in 2006 into tools that worked inside the browser. Firebug focused more on debugging JavaScript, while the Web Inspector prioritized view-source features. Over time, all browsers adopted the inspector with more or less the same features.77

67 LIVINGINTERNET. “Internet History -- One Page Summary.” Accessed April 7, 2025. https://www.livinginternet.com/i/ii_summary.htm.

68 Cern. “A Short History of the Web.” Accessed April 7, 2025. https://home.cern/science/computing/birth-web/short-history-web.

69 Becky Robinson. “Women Invented the Internet, Too,” Medium (blog), May 13, 2019. https://medium.com/@rshrobinson/women-invented-the-internet-too-4d3a2fef14ff.

70 World Wide Web Consortium. “A Little History of the World Wide Web.” Accessed April 7, 2025. https://www.w3.org/History.html.

71 Hanna, Barbara E., and Juliana De Nooy. Learning Language and Culture via Public Internet Discussion Forums. London: Palgrave Macmillan UK, 2009. https://doi.org/10.1057/9780230235823, 21; and Wajcman, Judy. “TechnoCapitalism Meets TechnoFeminism: Women and Technology in a Wireless World.” Labour & Industry: A Journal of the Social and Economic Relations of Work 16, no. 3 (April 2006): 7–20. https://doi.org/10.1080/10301763.2006.10669327, 12.

72 Consalvo, M. “Cyberfeminism.” In Encyclopedia of New Media, 108–9. SAGE Publications, 2003. https://doi.org/10.4135/9781412950657.

73 Rhizome. “Net Art Anthology.” Accessed April 7, 2025. https://anthology.rhizome.org/.

74 O’Sullivan, James, and Dene Grigar. “The Origins of Electronic Literature as Net/Web Art.” The SAGE Handbook of Web History, 2019. https://api.semanticscholar.org/CorpusID:181858184, 433-5.

75 I’m positioning my understanding of the current web in line with the theoretical frameworks of platform capitalism, surveillance capitalism, and digital colonialism.

76 Thompson, Clive. Coders: The Making of a New Tribe and the Remaking of the World. Penguin Publishing Group, 2019, 48–9.

77 Jay. “Checking ‘Under the Hood’ of Code.” The History of the Web (blog), May 21, 2021. https://thehistoryoftheweb.com/checking-under-the-hood-of-code/.

I want to elaborate more on my position on frameworks and especially template builders. With frameworks targeted at web developers, I think it’s great that there are tools that make web development, or in many cases web app development, easier. At the same time, I’m thinking about the reasons for streamlining these processes, and I believe that, in many cases, they can be summarized as optimization in response to market pressure. If the trade is sacrificing the understandability and accessibility of code, and by proxy reducing the potential agency of people interacting with a given website, for economic benefit, I really think it’s a bad trade. It is a trade all of us have to make, but nevertheless one we are somewhat forced into by our economic circumstances or by market pressure in general.

Edit: I could not find a good spot to discuss code minification, so it’s also here. Similar to frameworks, code minification is also a way to optimize web code. It compresses the code by removing all unnecessary characters. This makes the JavaScript load faster but leaves the code unreadable unless it is somehow reworked. Even then, all the original variable names, comments, etc., are wholly lost. When thinking about optimization in this case, I wonder: for whom is something being optimized? Certainly not you and me.
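
To make this concrete, here is a small, made-up example of what minification does to a piece of JavaScript; the snippet and its names are invented for illustration, and the minified line is roughly what a typical minifier would produce:

    // A readable version, as someone might write it, comments and all.
    function calculateTotal(cartItems) {
      // TODO: handle discounts here later
      let total = 0;
      for (const item of cartItems) {
        total += item.price * item.quantity;
      }
      return total;
    }

    // Roughly what a minifier turns it into: whitespace, comments, and
    // descriptive names are gone; only the machine-relevant part remains.
    // function c(t){let e=0;for(const n of t)e+=n.price*n.quantity;return e}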

I think the rise of the inspector and JavaScript debugging tools coincides with the rise of the platforms78 and Web 2.0 in general, with many websites focusing on user-generated content and growing ever more complex. For a big part of the web, you can still peek at what is happening, but you don’t get the full picture anymore. As Burgess explains with php, the browser still displays the HTML as the end result, but nobody can see the code for everything that happens on the server, like fetching specific data from a database to build the HTML.79 With ever more users on websites like Facebook, Twitter, or YouTube, a growing tech stack to manage and optimize these platforms, and progressing monetization, these websites transform into closed-down applications running on the web. This is also the time of a paradigm shift for the web at large. It moves away from an ethos of sharing code to build your own web to one of proprietary code, visible only because of the technological remnants of early web history. For me, the rise of JavaScript frameworks like React or Vue since the 2010s,80 as well as the increase of template builder websites,81 are symptomatic of this shift, as they streamline the process of making websites and web apps but sacrifice the accessibility and understandability of the code a visitor encounters on the website.
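
Burgess’s example is php; as a rough sketch of the same split in server-side JavaScript (a made-up Node.js example, with an invented stand-in for a database query), only the assembled HTML at the bottom ever reaches a visitor’s view source:

    const http = require("http");

    // A stand-in for a real database query; invented for this example.
    async function getPostsFromDatabase() {
      return [{ title: "First post" }, { title: "Second post" }];
    }

    http.createServer(async (request, response) => {
      // Everything in here runs on the server and never shows up in view source.
      const posts = await getPostsFromDatabase();
      const html = "<ul>" + posts.map((p) => "<li>" + p.title + "</li>").join("") + "</ul>";
      // Only this assembled HTML is sent to the browser.
      response.writeHead(200, { "Content-Type": "text/html" });
      response.end(html);
    }).listen(3000);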

For me, the app versions of websites on the phone and mobile browsers represent the endpoint of this obfuscation process. Apps transform the original websites into fully closed pieces of software, removing them entirely from any ethos of interacting with the code, while mobile browsers take away all options for viewing source code and interacting with code in general. Reading trend forecasts predicting that the majority of people connecting to the Internet already experience it exclusively via their mobile devices,82 I really wonder if the idea of making a website something that is not mediated through a service is already a thing of the past that only a few lost souls, i.e., web designers, cling to—so me and maybe you.


This leaves me in a somewhat awkward spot.
For whom am I making websites, and why?
Am I a service as well?

I know why I make websites—it’s fun, and I really enjoy every aspect of making them. The whole process is very personal, which is important to acknowledge. Most personal or professional websites I have made have been designed and programmed by me alone. The few times I collaborated with others, the process was either split between design and programming, or it was close to two people designing and coding on one device. I like to work on things where the project scope stays somewhat small and, to a certain degree, personal. I like collaborating with others, but I don’t enjoy being part of a bigger system where I can only do specific tasks regarding design or programming. I want to have a bond with the things I make and spend so much time with.

78 MySpace 2003, Facebook 2004, YouTube 2005, Twitter 2006, Tumblr 2007, Instagram 2010.

79 Dilger, Bradley J., and Jeff Rice, eds. From A to <A>: Keywords of Markup. Minneapolis: University of Minnesota Press, 2010, 184-5.

80 Mattias Tornqvist. “The Evolution of JavaScript Frameworks: From jQuery to React and Beyond” Medium (blog), June 20, 2023. https://medium.com/@mattias.trnqvist/the-evolution-of-javascript-frameworks-from-jquery-to-react-and-beyond-f94b34e7dae8.

81 John Hughes. “A Brief History of Website Builders,” December 2, 2023. https://wpshout.com/history-of-website-builders/#gref.

Website template builders, which are used by regular people, designers, and developers, are good things in themselves. The ability of tools to assist people who cannot code in making websites is wonderful. I think that creating websites and web-based art is not inherently connected to the ability to code, and a website or web-based artwork made without code is not in any way worse than if it had been coded. At the same time, I think the ability to code opens other avenues for understanding the medium of the website, which may be specific to code and are worth exploring. My main issue with the current system, and specifically the business model of template builders, is the shift from tool to closed ecosystem service. If I can use a website builder as a tool, that’s great! If it locks me into a system with limited options and a monthly subscription fee, where I cannot leave without losing my website—not great. On top of that, the generated HTML code has the same issue as the code generated by frameworks, as potential agency is turned into higher profit margins.

82 GSMA. “From ‘Mobile Only’ Internet to Content Strategies: New GSMA Study Identifies the ‘Megatrends’ Shaping Mobile Industry,” September 11, 2018. https://www.gsma.com/newsroom/press-release/from-mobile-only-internet-to-content-strategies-new-gsma-study-identifies-the-megatrends-shaping-mobile-industry/.

This is the first time I’ve experienced this with a medium. As I did a lot of 3D and regular graphic design before making websites, I was always surrounded by digital files. But so much remained unseen, with a disconnect between the final object and the process of making it. Having 100 GB of files for the process, reduced to a single JPEG as the object, somehow always felt weird.

For me, this applies to both personal and professional projects. When being compensated for making a website, I’d rather the website be a record of a collaboration, a bond, than a service I provide. I want to move away from the notion of the clean, effortless final product that appears out of thin air to be consumed. I want my websites to be imbued with their history—to be both a completed object and the process itself. I realized that this dual nature is why I like the medium of the website so much, be it on the Internet or even as a local HTML file. For me, this duality of object and process translates to the split between the website that exists because the code is performed and the code that is performed for the website to exist. Switching between these two very different layers and experiences opens up somewhat of a place for me. I can experience the same thing in two different modes, with potentially very different types of material, without them interfering negatively. If I’m interested in taking a peek behind the curtain, I’m allowed to—I love that curiosity is rewarded.

I want to set up a place for people who engage with the things I make.

I want to set up a place for people who are interested in websites.

I want to set up a place for people who are curious.

I want to set up a place where my past self would have felt welcome.

I want to set up a place where I treat you, me, and web code with care.

There are places like this on the Internet, and I want to contribute to them.