
Is Artificial Intelligence Creative?

By Kevin Gecowets

The widespread use of artificial intelligence (AI) tools gives us quick and easy access to a vast ocean of data and ideas, ranging from business intelligence to the visual arts. This has given rise to conversations about how AI is fundamentally changing education[1] and the workplace,[2] the possibility that AI might surpass human intelligence,[3] and the potential of AI to transform human history.[4] While this all feels new and somewhat frightening, we have been here before.

In the latter half of the 15th century, Gutenberg’s invention of the movable-type printing press made books more accessible and affordable. This prompted the Benedictine abbot Johannes Trithemius to critique the supplanting of hand-transcribed vellum manuscripts by mass-printed paper books. He extolled the virtues of the work of scribes, noting that those who copy divine works by hand have them more forcefully imprinted on their minds and are opened to divine mysteries and enlightenment. While he was not against the printed volume, he warned, “…nobody should say or think: What is the sense of bothering with copying by hand when the art of printing has brought to light so many important books; a huge library can be acquired inexpensively. I tell you, the man who says this only tries to conceal his own laziness.”[5] I find myself agreeing in spirit with Trithemius: losing the practice of manually searching and synthesizing information means losing a powerful teaching tool. His critique sounds very much like my educator colleagues’ praise for original research and warnings against “cheating” with AI.

The printed book’s supplanting of hand-transcribed vellum manuscripts is but one historical precedent for the AI revolution. When my children were in primary and secondary school, educators were scrambling to manage or even ban the use of crowd-sourced reference materials like Wikipedia. During my own primary school days, the use of electronic calculators to solve mathematical equations was considered cheating. Now their use is required in classes and for standardized tests. There have always been paradigm shifts brought on by changes in the technology we use to record, analyze, and transmit information.

Artificial intelligence is automating the task of searching and recombining data available on the internet. The internet networked and connected data that was stored on dispersed and distributed computers. Computers allowed the digital storage of information previously accessed through printed texts. Printing allowed the widespread and affordable distribution of works that were previously hand-transcribed on vellum. Vellum provided a writing medium that surpassed papyrus in availability and durability.[6] Papyrus allowed records to be more portable and durable than clay tablets and monumental hieroglyphic carvings. While millennia apart in their widespread use, the primary function of the clay tablet and papyrus is not far removed from accessing a predictive-text artificial intelligence program with a smartphone. Papyrus stores and organizes searchable data, just as AI does; the significant difference is the speed, reach, and convenience of the automated tools.

The term artificial intelligence is a misnomer. In their current form, machines are not sentient or conscious.[7] The predictive analytics algorithms that drive ChatGPT and similar AI tools search through huge amounts of data to identify patterns and predict what is likely to come next. Applying the law of large numbers,[8] intellectual products drawn from these huge data sets will, at best, be generic or average. At worst, they reflect systemic bias[9] or include false information[10] swept up from the depths of the web. This doesn’t prevent AI from providing a convincing facsimile of human thought. AI-generated products can be refined with prompts to produce more precise or distinctive results. I conducted an experiment with Generative Pre-trained Transformer 3 (ChatGPT3) and asked it to write a short essay on what Trithemius would say about AI. The language it produced was unnecessarily flowery, and the ideas somewhat repetitive compared to human-generated English translations of the Abbot’s writing. However, buried in the text was an insight I believe the Abbot himself would have endorsed: “We must remain ever watchful, lest we become ensnared by our own conjured creations. For even as Artifex Intellectus dazzles with its spectral brilliance, it is the interplay of human hand and soul that illuminates the annals of wisdom for eternity.”[11]

No matter how convincing the output, the source of all AI programming and prompting is human input and ingenuity. While technology enhances and quickens our ability to manipulate data, the quality and veracity of that data, and the meaning we make from it, still depend on human input in all its messiness. And this is where the question of creativity comes in.

Human beings are creative because we are not perfect machines. Aside from a few prodigious intellects with a photographic memory, most of us have leaks in our memory and imperfections in our perception that allow our minds to wander and free-associate. Somewhere in our distant evolutionary past, our brains got so large and complex[12] that we started to think in new ways. Thinking in new ways is quintessential creativity.

At the end of the 20th century, biologist E. O. Wilson observed, “We are drowning in information while starving for wisdom. The world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely.”[13] Putting together information is what AI, or more precisely predictive analytics, excels at. Yet human beings are still required to decide what the right information is, think critically about it, and make wise choices. Critical reflection, meaning-making, and the intuitive recombining of ideas through divergent and convergent thinking are the human work of creativity. Even as AI programs become more sophisticated, there will always be a need for divergent, abstract, and creative thinking.

AI and machine learning are not supplanting creativity…yet.


Rabbit Trails and References…

[1] The views of educators on AI in the classroom from the EdWeek Research Center survey. Langreo, L. (2023, April 14). What educators think about using AI in schools. EdWeek. https://www.edweek.org/technology/what-educators-think-about-using-ai-in-schools/2023/04

[2] A discussion of AI in the workplace. Noenickx, C. (2023, May 17). Workplace AI: How artificial intelligence will transform the workday. BBC. https://www.bbc.com/worklife/article/20230515-workplace-ai-how-artificial-intelligence-will-transform-the-workday 

[3] The views of experts in artificial intelligence. The Conversation. (2023, April 18). Will AI ever reach human-level intelligence? We asked five experts. https://theconversation.com/will-ai-ever-reach-human-level-intelligence-we-asked-five-experts-202515 

[4] An editorial on the rise of artificial intelligence. Kissinger, H. (2018, June). How the enlightenment ends. The Atlantic. https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/ 

[5] Trithemius, J. (1974). In praise of scribes: De laude scriptorum (R. Behrendt, Trans.). Coronado Press. (Original work published 1492).

[6] Rennick, R. (2022, December 29). The history of vellum and parchment. The New Antiquarian. https://www.abaa.org/blog/post/the-history-of-vellum-and-parchment 

[7] It is unlikely that machine intelligence will achieve consciousness. Koch, C. (2019, December). Will machines ever become conscious? Scientific American. https://www.scientificamerican.com/article/will-machines-ever-become-conscious/ 

[8] I am cheating a bit by applying a quantitative function in qualitative analysis. But the point is that the more random data we collect, the more the sample means congregate around the population mean. See Ramachandran, K. & Tsokos, C. (2015). Laws of large numbers. ScienceDirect. https://www.sciencedirect.com/topics/mathematics/laws-of-large-number
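As a footnote to this footnote, here is a minimal sketch in Python (my own illustration, not part of the cited source) of the idea: the larger the random sample, the closer its average sits to the population mean, which is the sense in which output aggregated from enormous data sets trends toward the “average.”

import random

# Illustrative sketch only. Assumption: data drawn uniformly from [0, 1], whose true mean is 0.5.
# As the sample size grows, the sample mean congregates around the population mean.
random.seed(42)
for n in (10, 1_000, 100_000):
    sample_mean = sum(random.random() for _ in range(n)) / n
    print(f"n={n:>7}: sample mean = {sample_mean:.4f}, distance from 0.5 = {abs(sample_mean - 0.5):.4f}")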

[9] AI as a tool is not inherently good or evil. It does reflect the cognitive biases and patterns of the authors of the data it searches and sorts. See Getahun, H. (2023, January 16). ChatGPT could be used for good, but like many other AI models, it’s rife with racist and discriminatory bias. Insider. https://www.insider.com/chatgpt-is-like-many-other-ai-models-rife-with-bias-2023-1 

[10] AI does not distinguish between fact and fantasy. CBS News (2023, June 23). Lawyers fined for filing bogus case law created by ChatGPT. https://www.cbsnews.com/news/chatgpt-judge-fines-lawyers-who-used-ai/ 

[11] ChatGPT3 (2023). Trithemius’s views on AI. Generated by https://chat.openai.com/ on August 17, 2023.

[12] We are just beginning to understand how humans became creative (or creatives became human), and the science behind it is fascinating. Pringle, H. (2013, March 1). The origin of human creativity was surprisingly complex. Scientific American. https://www.scientificamerican.com/article/the-origin-human-creativity-suprisingly-complex/ 

[13] Wilson, E. (1998). Consilience: The unity of knowledge. Vintage Books. p. 294.
