

ROLE: Research in Online Literacy Education, A GSOLE Publication

Fire Walk with Me or The Metamodern Prometheus

Demythologizing AI Tools through AI Literacy

by Justin Cary



Publication Details

 OLOR Series:  Research in Online Literacy Education
 Author(s):  Justin Cary
 Original Publication Date:  19 December 2025
 Permalink:

 <gsole.org/olor/role/vol4.iss2.f>


Through the darkness of future past, the magician longs to see, one chants out between two worlds, fire walk with me!

– David Lynch

It was very different, when the masters of the science sought immortality and power; such views, although futile, were grand, but now the scene was changed. The ambition of the inquirer seemed to limit itself to the annihilation of those visions on which my interest in science was chiefly founded. I was required to exchange chimeras of boundless grandeur for realities of little worth.

– Mary Shelley, “Frankenstein; or, The Modern Prometheus,” 1818

1. Full Article

[1] November 30, 2022 was an auspicious day. Harry Styles reigned on the Spotify charts with “As It Was,” the most-streamed song globally. Hakeem Jeffries was elected as the first Black minority leader of the US House of Representatives. The US Congress coalesced to prevent a rail strike. The US men’s soccer team scored a big victory in the World Cup, the world’s largest volcano spewed lava into the air, and OpenAI released to the public, for the first time, a little demo of a new piece of software it wanted everyone to test out, called ChatGPT. Within five days, the chatbot had attracted over one million users.1 The model that launched in 2022, GPT-3.5, was not the first iteration of the technology. GPT-1 appeared in June 2018 and was one of many generative pre-trained transformer (GPT) technologies being developed around that time. This technology distinguished itself as new and unique because, unlike traditional search-and-retrieval technologies such as Google, which rely on complex processes such as “crawling” and “indexing,”2 GPTs are built on Large Language Models (LLMs), software “trained” to predict or guess what words and phrases will most likely be used, needed, or come next when a user prompts them. Much like Prometheus bringing the new fire of innovation from Mount Olympus to humanity, this new technology presented a different way of thinking about and using software to accomplish tasks. When GPT-3.5 and later GPT-4 launched, and with them a simple and intuitive user interface, the world took notice, and ChatGPT set a record as the platform with the fastest-growing user base ever.3 It would seem this particular new technological fire ignited humanity’s collective imagination very quickly.
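The next-word prediction described above can be sketched in miniature. The “model” below is just a lookup table of invented words and probabilities, nothing like the scale or architecture of a real LLM, but the underlying principle is the same: given a context, choose the most probable continuation.

```python
# A toy illustration of next-word prediction. Real LLMs learn probability
# distributions over tens of thousands of tokens from massive training
# corpora; here the "model" is a hand-written table of invented values.
toy_model = {
    ("the", "quick"): {"brown": 0.7, "red": 0.2, "lazy": 0.1},
    ("quick", "brown"): {"fox": 0.9, "dog": 0.1},
}

def predict_next(context, model):
    """Return the most probable next word for the last two context words."""
    dist = model.get(tuple(context[-2:]), {})
    if not dist:
        return None  # the model has never seen this context
    return max(dist, key=dist.get)

print(predict_next(["the", "quick"], toy_model))  # -> brown
```

The point is that nothing is being looked up or retrieved; the output is simply whichever continuation the table ranks as most likely.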

[2] The tale of Prometheus is a myth, the story of a rogue god who wanted to bring a power once owned only by the gods of Olympus to humanity so they could also benefit from it. In the Prometheus story, this power took the form of fire, a representation of technology, something that must have seemed nothing short of magical to the humans first encountering it. Sam Altman of OpenAI commonly uses the language of magic to construct this very myth around ChatGPT. For example, in May 2024 Altman wrote on X, “not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! feels like magic to me.”4 Altman, and many others, use this sort of language and rhetoric all the time to talk about AI, to create the “myth” that AI is magic, that it can do anything and everything. So here now, I would like to share a little story of my own. The premise: a Metamodern Prometheus has been unleashed, a myth that began on November 30, 2022, as the world was introduced to a new fire, a new technology. This myth has been growing ever since: a magical, mythical tool that can do it all. It can write for you, it can create art for you, it can do all your math, make all your spreadsheets, tell you jokes, give you recipes, summarize your PDFs . . . what can’t it do? Seemingly, ChatGPT and other AI tools have become almost mythological in nature, solidified in the zeitgeist almost overnight as technological marvels—how do they work? How are they able to come up with these answers so quickly and efficiently? It almost seems like magic! But it is not magic.

[3] Discerning the boundaries between science and magic can be difficult if the science is advanced enough, or the magic trick is clever enough. So what is the answer to the myth of AI? The answer is Literacy, specifically a new Literacy framework that overlaps with other Literacy models, especially those created around Digital Literacy and Media Literacy. Foundational AI Literacy for writing, in online and offline spaces, will equip writers with the literacy skills, tools and frameworks they need to better understand what Artificial Intelligence actually is, how it works and how to engage with these tools in order to deconstruct the myths that have been created around these applications and platforms. So! Let us attempt to demythologize AI a little bit, to peer into the fire and explore AI through the lens of four crucial pillars: ethical considerations of AI use; critical concepts of applied AI for writing; collaborating with AI tools to better understand how they work behind the scenes; and using AI tools transparently to ensure clear boundaries between human and nonhuman production. In doing so, we can begin to understand this metamodern Prometheus that has been unleashed into our writing spaces, a tool that has so quickly spread into our pedagogy, practice and parlance.

[4] In September of 2023, the Associated Press reported that Microsoft’s global water consumption spiked 34 percent from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. “It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of AI products such as ChatGPT (apnews.com).5 The first myth that must be confronted, deconstructed and wrestled with, now that generative AI systems seem ubiquitous in our educational systems barely a year later, is this: AI tools are magic.6 Using AI is very real, and while it may “seem” like magic, we must confront the very real impact these tools have. The environmentally devastating impact of these systems must be at the forefront of AI discussions before we even start thinking about prompts, platforms or the proliferation of AI tools. As David Berreby writes in “As the Use of AI Soars, So Does the Energy and Water It Requires,” published by Yale Environment 360 in February 2024:

AI use is directly responsible for carbon emissions from non-renewable electricity and for the consumption of millions of gallons of fresh water, and it indirectly boosts impacts from building and maintaining the power-hungry equipment on which AI runs. As tech companies seek to embed high-intensity AI into everything from resume-writing to kidney transplant medicine and from choosing dog food to climate modeling, they cite many ways AI could help reduce humanity’s environmental footprint. But legislators, regulators, activists, and international organizations now want to make sure the benefits aren’t outweighed by AI’s mounting hazards (e360.yale.edu).7

[5] So, Prometheus has brought humanity this new fire, this new technology, but at what cost? When we log in to ChatGPT, to MidJourney or CoPilot, we see a clean, spiffy, welcoming user interface that greets us with a “what can I do for you?” message or a “let’s create together!” invitation touting the power and potential of AI. But behind this glossy facade is a real-life environmental impact that is unseen when the user queries the AI; there is no animation of a water bottle pouring out onto the ground every time someone hits enter to ask ChatGPT a question. So this myth, that AI is just a magical piece of software that exists on our phones and in our browsers and has no real-world implications, is the first place to begin this ethical deconstruction, this mythological investigation of the Metamodern Prometheus that has been unleashed. What is to be done? Conversations with student writers in classrooms are a great place to start. Discuss these ethical concerns with students before the first prompts are written. Ensure students know about the ecological impacts of the tools they are using or being asked to use so they can become informed stewards of this technology and consenting users, making their own decisions about the role they will play in the AI landscape. Provide the rhetorical framework so student writers can understand how these systems impact their environments before any prompts are written or images are generated. By building this foundational AI literacy through the lens of an ethical framework, students begin the work of demythologizing the stories that have been propagated around AI, which will help them use these tools in more ethical ways. This work must also be done in universities, colleges and departments as more faculty are made aware of these tools and more voices are brought into the AI conversation. But ecological concerns are not the only ethical myth that needs unveiling.

[6] The second myth we must deconstruct is the belief that the writing produced by generative AI tools such as ChatGPT, CoPilot and others is somehow neutral, unbiased and free of the human perceptions imbued in all writing. Generative Pre-Trained Transformers (GPTs) are built on Large Language Models (LLMs). Most companies, like OpenAI, do not divulge information about the datasets used to build and train their LLMs, creating mysterious “black-box” situations. We simply have no idea what writing, texts, books, online data, and human language sets are being used to train ChatGPT to do what it does: to predict what sequence of words, syllables and language patterns it thinks you want to see when you ask it a question. When systems like ChatGPT are trained on data that contain inherent biases, these biases become normal, even factual. Damien P. Williams points to this concern in “Bias Optimizers,” published in American Scientist, when he writes, “Such prejudices are inherent in the data used to train AI systems. The factual and structural wrongness is then reinforced as the AI tools then issue outputs that are labeled ‘objective’ or ‘just math.’ These systems behave the way that they do because they encode prejudicial and even outright bigoted beliefs about other humans during training and use. When it comes to systems such as ChatGPT, these problems will only increase as they get more powerful and seem more ‘natural.’ Their ability to associate, exacerbate, and iterate on perceived patterns—the foundation of how LLMs work—will continue to increase the bias within them” (Williams, 207).8 A quick visit to huggingface.co, a community of folks creating their own GPTs trained on a variety of datasets, will reveal that you can build your own GPT and train it on a predetermined dataset, such as the 71 gigabytes of data contained in Wikipedia.
In July 2023, Liang et al. published “GPT detectors are biased against non-native English writers” in Patterns, an open-access Cell Press journal, pointing to the fact that “GPT detectors exhibit significant bias against non-native English authors, as demonstrated by their high misclassification of TOEFL essays written by non-native speakers” (Liang et al., 2023). This article demythologizes one of the most important ethical myths that needs to be examined with AI tools: that generative AI platforms are a representative technology. They are not. The LLMs behind many of the largest and most popular GPTs simply cannot account for language and cultural diversity, and they come up short when it comes to representation of non-English speakers and non-English writers. These systems exclude ideas, cultures, systems, voices and identities not represented in the “majority” of whatever bulk of data trained the LLM. And this makes sense, right? Just look at the internet, or any system of power and privilege that currently exists. If these are the dominant systems in play, and these are the systems being used to train the AI, to teach the AI about human language patterns, speech patterns and so on, then of course the AI is going to mimic and mirror those systems. The myth of generative AI presents a “magic” tool that can create amazing writing in seconds; this is the myth AI Literacy aims to dispel, by equipping writers with critical thinking skills to understand how AI systems actually work, how they are trained, on what data they are trained and how to use them in more ethical ways in order to more fully understand how these systems represent or misrepresent language.
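The claim that a model mirrors the skew of its training data can be made concrete with a toy frequency model. The “corpus” here is invented for illustration: whatever pattern happens to dominate the training text dominates the prediction, and nothing in the math flags that dominance as bias.

```python
from collections import Counter

# A toy frequency model: the invented corpus below is deliberately skewed,
# and the "model" (a count of which word follows "said") inherits that skew.
skewed_corpus = ("the doctor said he would help . "
                 "the doctor said he was busy . "
                 "the doctor said she was ready .").split()

def next_word_counts(corpus, word):
    """Count which words follow `word` anywhere in the corpus."""
    return Counter(nxt for cur, nxt in zip(corpus, corpus[1:]) if cur == word)

counts = next_word_counts(skewed_corpus, "said")
print(counts.most_common())  # the majority pattern wins the prediction
```

A real LLM is vastly more sophisticated, but the same dynamic holds: the output reflects the distribution of the training data, presented back to the user as if it were neutral.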
So again, before we even get to the prompt input text box, these ethical myths must be deconstructed as the first step to understanding the metamodern Prometheus, to calling this Chimera forth from the shadows and seeing it for what it really is and attempting to understand it more fully for those who wish to use it to teach and for those using it to learn and write. From here, we move into more practical considerations of AI usage and think about critical, transparent and collaborative ways to make AI less mythical and more tangible for students and teachers.

[7] The third myth around generative AI that needs exploration and deconstruction is that generative AI is going to make writing obsolete; this is a myth, and in fact just the opposite is true. Generative AI tools, when paired with foundational AI Literacy, will create writers engaging in technologically mediated writing spaces who are able to address complex and cutting-edge writing situations in new and exciting ways. One of my go-to generative AI tools is a platform called MidJourney, which is self-described as “an independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species.” This ambitious description speaks to the mission of this AI tool—to tap into imagination. MidJourney is an image-making AI tool; I think of it as something I can use to “create images with words.” As someone who was always a mediocre artist and, perhaps, an above-average writer, arriving in an age in which I could leverage my writing practice to create images I would never be able to create with my own hands was an exciting prospect when I first discovered MidJourney in 2022. Slowly, I learned the language of the prompt, a very specific lexicon required to get the MidJourney Bot that I installed in my Discord server to do what I wanted. A standard MidJourney prompt looks something like this: “/imagine still frame long shot of a skeleton dressed in old distressed dusty knights templar costume, chainmail, coat-of-arms, red cross, crusader, undead templars, interior crypt background, shallow depth of field, inspired by the film indiana jones, 35mm, cinematic lighting, dramatic lighting Image Prompts, Style Reference –v 6 –ar 16:9 –chaos 25 –stylize 300 –style raw –personalize 76mgsrj” (MidJourney Magazine Issue 17, Vol. 6).
This specific prompt, these specific words in this specific order, modified with these specific commands and personalized with user 76mgsrj’s specific tastes, resulted in this image, which did not exist until user 76mgsrj wrote it into existence.

Image 1. A skeleton dressed as a crusader, generated in MidJourney.9

[8] Writing images with words. That image did not exist until user 76mgsrj wrote it into existence; that user used writing and ideas in a specific way, with a specific intentionality, to get the image they wanted. The audience of this piece of writing was a machine, an AI. The person writing this understood to whom they were writing; they understood the rhetorical situation. The audience was a non-human one, and so they composed their text with this audience in mind, adding phrases like “inspired by the film indiana jones” (notice the lack of capitalization, because computers do not really care about proper nouns) and specific commands like “chaos 25” (which allows the AI to exert a little more chaotic nuance on the output image). If you were to copy this exact prompt and ask MidJourney to make this image again, you would get something similar but different, and herein lies the small hidden truth of how AI really works (remember the GPT of it all). It is not searching for indexed data; it is predicting outcomes. It is a Generative Pre-Trained Transformer. MidJourney is guessing. Most of the time, those guesses are pretty good. And since 2022, it has gotten a lot better at guessing, and it will continue to do so. But it is important that we explore the myth that AI is some all-powerful, all-knowing tool; it really doesn’t “know” anything. It is just good at making a solid prediction. This crucial piece of AI Literacy is a foundational key to demythologizing AI for writers: the understanding that collaboration with AI is a new process, unlike the writing and composing processes that many writers are accustomed to. New AI Literacy skills and habits will have to be learned, taught and explored, habits such as comfort with unpredictability, as with the output of this image prompt. The writer will have no idea what will appear when the text prompt is sent to MidJourney.
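That same-prompt-different-output behavior comes from sampling: generative models draw from a probability distribution rather than retrieving a stored answer. A minimal sketch, with an invented distribution and word list, of why repeated runs diverge:

```python
import random

# Generation as sampling: each run draws from a probability distribution,
# so identical prompts produce varying outputs. The distribution below is
# invented purely for illustration.
next_word_dist = {"castle": 0.5, "crypt": 0.3, "cathedral": 0.2}

def sample_next(dist, rng):
    """Draw one next word at random, weighted by the distribution."""
    words = list(dist)
    return rng.choices(words, weights=[dist[w] for w in words], k=1)[0]

rng = random.Random()  # unseeded, so runs differ, like repeated prompts
samples = [sample_next(next_word_dist, rng) for _ in range(5)]
print(samples)  # five weighted draws; the mix changes from run to run
```

Platforms expose knobs over this randomness (MidJourney’s “chaos” parameter is one example); the sketch above shows only the basic weighted draw.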
In this way, the “process” becomes one of curation and critical decision making about what fits most closely with the vision the writer has for the product. AI Literacy for writers means a redefinition of process and product and this new AI Literacy is necessary to deconstruct the myth that generative AI is the only “magical tool” that is needed to create everything. Which takes us to transparency and collaboration.

[9] The fourth and final myth of generative AI that must be deconstructed is the belief that the use of generative AI tools should somehow be hidden or used in secret, when in fact transparent and collaborative use, another key pillar in AI Literacy, is a crucial step in shedding light on what it means for writers to use AI tools. In all technology-mediated writing spaces, it is paramount that writers work with AI tools in transparent and collaborative ways. What does transparency mean when talking about AI? In my year-long pilot study using a platform called PowerNotes that incorporated ChatGPT as the AI tool, my team made sure that whenever a writer had an opportunity to interact with the AI, that interaction became part of that writer’s process, and that process was folded into the work of creating the text. For example, PowerNotes automatically curated student-AI interactions and nested them as part of the writing process, collating these interactions as part of the text students were writing. These student-AI writing conversations and points of engagement became embedded in the texts students composed, and the AI interactions became visible from process to product. These interactions should never be hidden or veiled but instead brought to the surface and included in the work. Just as tools like MidJourney make predictions about outputs, it is important that students develop a keen sense of literacy around what it actually is that the AI is sharing with them when they prompt it and ask for a response; to include that response and acknowledge it as the result of a query, instead of seeing it as a product or an end result, is a major shift in how writers perceive what an AI produces. Stemming from this is the concept of collaboration.
In an attempt to demythologize AI further still, we need not see these tools as “magical objects” that can do everything for us but instead as supportive technologies that can do our work with us. In the same way humans have always used assistive and supportive technologies to enhance human thought, productivity and workflow, AI tools can become writing collaborators, artistic idea generators, research assistants, and much more if we begin to unpack some of the myths we have constructed around them and start to see them for what they really are.

[10] Prometheus was chained to a cliffside, where an eagle devoured his liver for all eternity, for his crime of bringing the “fire” of technology to human beings. Have humans now taken up Prometheus’ noble cause, to bring forth a new fire unto ourselves, one that does not burn and flicker but instead generates and writes and sometimes hallucinates? Perhaps, but we should also remember that the tale of Prometheus is one born from mythology. It is a story crafted to help us understand something unknowable. Artificial Intelligence tools are knowable. AI is not a mythological concept that can only be explained with awe and wonder and stories. Through a comprehensive AI Literacy encompassing frameworks for ethics, critical thinking around AI tools, methodologies for collaborating with AI and using AI in transparent ways, we can start to know AI in a more tangible way and start to see it less as a myth and more for the real fire that it may be—a fire that could burn away all our old notions of what we thought we knew about writing, like a natural forest fire clearing the canopy for new growth, or like a rogue god of Olympus presenting a new path, or perhaps something entirely new: a Metamodern Prometheus forged from the composite parts of the past, offering a vision of the future.

2. Endnotes

  1. Marr, Bernard. “A Short History of ChatGPT: How We Got to Where We Are Today.” Forbes.com, 19 May 2023. Accessed 27 August 2024. https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/
  2. “In Depth Guide to How Google Search Works.” Google Search Central. https://developers.google.com/search/docs/fundamentals/how-search-works
  3. Hu, Krystal. “ChatGPT Sets Record for Fastest-Growing User Base.” Reuters, 2 February 2023. Accessed 27 August 2024. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
  4. Altman, Sam [@sama]. X, 10 May 2024. https://x.com/sama/status/1788989777452408943
  5. Fingerhut, Hannah, and Matt O’Brien. “Artificial Intelligence Technology Behind ChatGPT Was Built in Iowa, with a Lot of Water.” Associated Press, 9 September 2023. https://apnews.com/article/chatgpt-gpt4-iowa-ai-water-consumption-microsoft-f551fde98083d17a7e8d904f8be822c4
  6. Altman, Sam. “The Intelligence Age.” 23 September 2024. https://ia.samaltman.com/
  7. Berreby, David. “As the Use of AI Soars, So Does the Energy and Water It Requires.” Yale Environment 360, 6 February 2024. https://e360.yale.edu/features/artificial-intelligence-climate-energy-emissions
  8. Williams, Damien Patrick. “Bias Optimizers.” American Scientist, Special Issue: Scientific Modeling, July–August 2023. https://afutureworththinkingabout.com/wp-content/uploads/2024/01/Bias-Optimizers-PREPRINT.pdf
  9. From MidJourney Magazine, Issue 17, Vol. 6 (2024 MidJourney, Inc.).

3. References

Altman, Sam [@sama]. X, 10 May 2024. https://x.com/sama/status/1788989777452408943

Berreby, David. “As the Use of AI Soars, So Does the Energy and Water It Requires.” (2024, February 6). Yale Environment 360. https://e360.yale.edu/features/artificial-intelligence-climate-energy-emissions

Fingerhut, Hannah, and O’Brien, Matt. “Artificial Intelligence Technology Behind ChatGPT Was Built in Iowa, with a Lot of Water.” (2023, September 9). Associated Press. https://apnews.com/article/chatgpt-gpt4-iowa-ai-water-consumption-microsoft-f551fde98083d17a7e8d904f8be822c4

Hu, Krystal. “ChatGPT Sets Record for Fastest-Growing User Base.” (2023, February 2). Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

“In Depth Guide to How Google Search Works.” (2024, April 22). Google Search Central. https://developers.google.com/search/docs/fundamentals/how-search-works

Islanderart. “still frame long shot of a skeleton dressed in old distressed dusty Knights Templar costume, chainmail, coat-of-arms, red cross, Crusader, undead Templars, interior crypt background, shallow depth of field, from the film Indiana Jones, 35mm, cinematic lighting, dramatic lighting chaos 25--ar 16:9--style raw--v 6--stylize 300--personalize 76mgsrj” (2024, June 23). MidJourney. https://www.midjourney.com/jobs/f8a5a2a8-da53-40f1-96da-82c2b70f34d7?index=0

Liang, Weixin, et al. “GPT Detectors Are Biased Against Non-Native English Writers.” (2023, July). Patterns, vol. 4, no. 7. Cell Press.

Marr, Bernard. “A Short History of ChatGPT: How We Got To Where We Are Today.” (2023, May 19). Forbes. https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/

Williams, Damien Patrick. “Bias Optimizers.” American Scientist, Special Issue: Scientific Modeling, July–August 2023. https://afutureworththinkingabout.com/wp-content/uploads/2024/01/Bias-Optimizers-PREPRINT.pdf
