Citing the Uncitable: Developing Standards for AI and New Media in Scholarly Work
Abstract
Artificial intelligence and generative models are rapidly reshaping the methodologies of academic research and scholarship, but cohesive citation standards are still in progress. Without reliable, transparent and standardized citations accepted by the academic community, AI-supported research risks either being misreported or underreported, or being deemed untrustworthy for acceptance in scholarship. The citation of AI support poses a number of novel issues, the two most foundational being that AI outputs are neither replicable nor verifiable, two core tenets of traditional citation requirements. Outputs from large language models are non-deterministic, ephemeral, and context-sensitive. This necessitates nuanced probes into citation debates on the issues of model variability, training data, hosting, prompting, classification of function, legalities, variance in institutional standards, authorship, licensing, ethics, digital and physical archival responsibility and the rapid pace of development. The goal of a standard practice for citing AI-supported research and scholarship is to promote transparency of use and build credibility for AI as a methodological tool. This paper presents a section of work from the CLARIAH-AT-funded initiative to develop citation standards for six types of new media, AI outputs being one of them. Drawing on a series of collaborative discussion-based workshops, teaching opportunities, and written reflections, the project culminated in the proposal of six citation categories covering software packages, data sets, digitized resources, social media posts, ephemeral content and AI outputs. These efforts demonstrate that the citation of new media is complex and contested, but the act of bringing forth these discussions serves as an ethical roadmap for academia as historical and scholarly methodologies adapt to the age of AI.
Keywords: AI, LLM, Prompts, Citation, Scholarship, New Media, Transparency
1. Introduction
AI has moved from novelty to commonplace in our lives and, in many cases, our workflows. Historians, librarians, and researchers use AI tools to process data, analyze sources, and shape narratives. However, we have not yet established a method of citation for the use of these tools and applications. Widely used and accepted citation conventions were designed for monographs, collected works and articles. These classical conventions struggle to account for a medium that produces dynamic, personalized outputs, as well as for the varied roles such tools play in the production of research and scholarship, be it collaborator, co-author, analyst, or other. Traditional conventions are far out of their depth: a citation from a book will not change no matter who picks it up, and an article cannot be a co-author of a paper that includes it.
Each of the major citation style organizations has recognized the need for citation conventions covering the contributions of AI outputs and has put forth “temporary suggestions” for how one may cite AI use under its guidelines. These suggestions indicate how each organization views AI, whether as a software tool, an author of irretrievable content, a collaborator, or a container. They provide only a basic level of citation guidance and do not account for the varied applications of AI in research and scholarship.
1.1 International Standards
Software or Tool:
APA Guidelines: Author/Developer. (Year). Title (Version) [Description]. URL.
Harvard Guidelines: Author/Organisation, Year. Title, version. [Computer software] (or similar). Available at: URL (Accessed: date).
These guides treat AI as a software tool, as their formatting suggestions and required information indicate. The inclusion of a “developer” option, rather than only an author, and of “versioning” signals that there is no stable, citable content object but rather a tool. The URL examples also link to product pages, not to specific citable outputs, indicating that the citation documents a method or tool rather than pointing to a retrievable publication.
Container:
MLA Guidelines: Author (if available). “Prompt (if relevant).” Name of AI Tool, version (if given), Company, Date of Access, URL (if applicable).
This guide treats AI as a container, similar to an article being in a journal or a page being on a website. The AI system is the container or publication environment and the output exists within it.
Author of Irretrievable Content:
Chicago Guidelines: “Prompt text.” Name of AI Tool, version (if given), Company, date of generation, URL (if applicable).
This guide treats AI in book-like form, as a reference work, through the inclusion of information such as a publisher, version and date of generation. The content is understood to be irretrievable because the exact AI output cannot be recovered, similar to citing an unpublished manuscript or personal communication.
1.2 Current State of Research
The scholarly work that has tackled AI use in academia covers a wide range of subtopics that factor directly into the decision-making process and debates surrounding when and how to cite AI support in scholarship. There are, of course, a number of broad-based examinations of how generative AI is reshaping the entire scholarly value chain, from knowledge production to dissemination. These articles postulate the appropriate uses of AI support in the generation of scholarship, including knowledge synthesis, development, evaluation and translation (Grimes, 2023). The variety of these so-called acceptable applications indicates a need for transparent and appropriate methods of citation. Other articles examine which types of models are appropriate for use and citation in scholarly works. These models are typically smaller and more refined in scope and topic, contain training data from scholarly sources, and are often locally hosted (Montague-Hellen, 2024). The variation in models and in the reliability of their output indicates a gulf between use cases: a closely aligned, smaller model trained on scholarly works is a more acceptable candidate for direct output citation than a larger model trained on vast data and prone to hallucination. The delicate balance between AI authorship and academic integrity has been raised in a number of works. The protection of human creativity and critical thinking is paramount to the preservation of human authorship (Wise, 2024). The role that AI plays in the creation of scholarship also indicates the appropriate method of citation, be it a mention in the methodology, a proper in-line citation, or an acknowledgement in a list of works consulted.
2. Materials and Methods
The research referenced in this article draws on a range of sources.
2.1 CLARIAH-AT Project
Primarily, this research centers on the CLARIAH-AT-funded project to draft citation guidelines for new media sources in order to promote data reuse and strengthen the transparency of new methods in scholarship. The new citation guidelines were framed as a revision of, and addition to, the current guidelines of the Institute of History at the University of Vienna, specifically Section E, Electronic Resources, last updated in June 2023. The project took the form of two interdisciplinary workshops with about ten attendees across academic fields and institutions. The workshops were hosted virtually by the project co-leads, Emily Genatowski and Dr. Thomas Wallnig. Prior to each workshop, materials were circulated to introduce the topics to be discussed. During each workshop, slides guided the group discussion on each of the six topics, and notes were taken as the discussions and debates proceeded. The notes were then synthesized and incorporated into proposed new citation forms that took into account the issues raised in discussion. The proposed forms were circulated once more to the participants prior to the final workshop, where any concluding issues were raised and noted. After the conclusion, the notes were synthesized and final adjustments were made. The final citation conventions for each of the six categories were then submitted to CLARIAH as the conclusion of the project.
2.2 International Love Data Week: University of Graz
One of the project co-leads, Emily Genatowski, attended International Love Data Week in Graz, Austria, to present on the citation of new media initiative. The talk, held at the library of the University of Graz, outlined the goals of the project co-leads, the methodology of the project, and the structure and timeline of the team, and prompted a number of discussions in the Q&A segment that followed. These discussions were noted and raised again with the workshop groups during the virtual sessions.
2.3 AI in Academia Workshop: University of Vienna
Both project co-leads, Dr. Thomas Wallnig and Emily Genatowski, were involved in a workshop at the University of Vienna titled AI in Academia: Transparency, Efficiency and Responsibility. The workshop was aimed at graduate students looking to refine and strengthen their use of AI in their work. The pair delivered a joint lecture titled Citations of AI Supported Scholarship, which introduced the students to the CLARIAH-funded project as well as to the current, widely acknowledged debates on AI-supported scholarship. The students were then led through an anonymous poll asking about their use of AI in scholarship and what types of crediting and citation they felt were possible. The results were displayed anonymously in real time on a projector, and the students were then asked to discuss them.
2.4 AI and Large Language Models for Humanities Research: University of Vienna
Project co-lead Emily Genatowski founded and taught a master’s-level methodological workshop at the University of Vienna in 2023 which laid the groundwork for the concepts of AI-supported scholarship. The course was interactive and covered topics integral to the citation debate, including transparency, authorship, ethics, prompting techniques, training data analysis, model variability and hosting concerns. The discussions in this course were integral to the foundations of the AI prompting discussion sessions in the ensuing CLARIAH project workshop series. The course was later adapted and published through DARIAH Campus, a pan-European digital infrastructure for educational materials.
2.5 Emerging Digital Methodologies Conference: Oxford University
Project co-lead Emily Genatowski delivered a full paper presentation at the University of Oxford’s Emerging Digital Methodologies Conference on the process of updating, adapting and problematising the citation methods surrounding AI-supported scholarship. The discussions surrounding this presentation were incorporated into the composition of Section 4 of this paper.
2.6 Reading Course Digital Humanities - Theory and Concepts in the Digital Humanities: University of Vienna
Project co-lead Emily Genatowski guest lectured on citation methodologies during this course taught by project co-lead Dr. Thomas Wallnig. The material covered in the session taught by Ms. Genatowski was split equally between traditional citation methodologies and citations of AI-supported scholarship.
3. Results
The results of the project after the workshop series, lectures and discussions span all six categories: software packages, data sets, digitized resources, ephemeral media, social media and AI outputs. For the purposes of this paper, the findings for the other five categories are listed in the appendix; the master citation guide and the variations of AI citations are placed below. The master citation guide captures at a higher level what could and should be included and provides a flexible framework, whereas the seven citation variations within the category of AI-supported research reflect differences in engineering, application and responsibility. The variations occur in how the prompts are engineered (e.g., single prompt, multimodal prompt or multi-turn refinement), how the output is applied in the academic work (e.g., citation of output text, or use as a tool or method), and how much agency the AI support has in the creation of the scholarly work (e.g., authorship or co-collaboration). The final examples are below.
3.1 Flexible Framework
Footnote Format
Prompt Author. “Prompt: [Full or excerpted prompt text].”
Generated using: [Model name and provider, e.g., OpenAI ChatGPT-4].
Platform: [e.g., ChatGPT, Poe, Perplexity AI].
Date of Generation: [YYYY-MM-DD].
Preserved via: [e.g., Archived transcript, Screenshot, Exported file].
Archive Reference or Link: [URL, archive ID, or filename].
Accessed [Date].
(Optional: Response Excerpt or Summary; Optional: Use Case Context).
Bibliography Format
Prompt Author. “Prompt: [Full prompt or representative excerpt].”
Generated using: [LLM Model Name and Version] by [Provider, e.g., OpenAI].
Platform: [Chat Interface Name, e.g., ChatGPT, Perplexity].
Date Generated: [e.g., 2025-06-03].
Preserved via: [e.g., Screenshot, Archive.org, Local Transcript Export].
Link or Archive ID: [Persistent URL or Filename].
Accessed [Date].
(Optional: “AI Output used in: [e.g., analytical summary, creative generation, etc.]”)
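For projects that manage many AI citations programmatically, the flexible framework above can be modeled as a simple record type. The following Python sketch is purely illustrative: the class and field names are our own choices, not part of any citation standard, and the rendering simply mirrors the footnote format above.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIPromptCitation:
    # Fields mirror the flexible framework above; the names themselves
    # are illustrative, not prescribed by any style guide.
    prompt_author: str
    prompt_text: str
    model: str            # e.g., "OpenAI ChatGPT-4"
    platform: str         # e.g., "ChatGPT", "Poe", "Perplexity AI"
    date_generated: str   # YYYY-MM-DD
    preserved_via: str    # e.g., "Archived transcript", "Screenshot"
    archive_ref: str      # URL, archive ID, or filename
    accessed: str
    use_case: Optional[str] = None  # optional "Use Case Context" field

    def footnote(self) -> str:
        """Render the record in the footnote format proposed above."""
        parts = [
            f'{self.prompt_author}. "Prompt: {self.prompt_text}."',
            f"Generated using: {self.model}.",
            f"Platform: {self.platform}.",
            f"Date of Generation: {self.date_generated}.",
            f"Preserved via: {self.preserved_via}.",
            f"Archive Reference or Link: {self.archive_ref}.",
            f"Accessed {self.accessed}.",
        ]
        if self.use_case:
            parts.append(f"(Use Case Context: {self.use_case}.)")
        return " ".join(parts)
```

Keeping the fields in a structured record rather than free text means the same data can be rendered into the footnote form, the bibliography form, or any future institutional template without re-transcription.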
3.2 Variation Guidelines Based on Use
I. Single-Prompt, Single-Output (Public Use)
Footnote format:
Authoring Entity (e.g., OpenAI), Model Name, prompt: “Prompt text,” date generated, platform (e.g., chat.openai.com), accessed [Access Date]. License: [Usage License or Terms].
Bibliography format:
Authoring Entity. Model Name. Prompt: “Prompt text.”
Generated [Date] via [Platform]. Accessed [Access Date].
License: [Usage License or Terms].
II. Multi-Turn Conversation (Threaded Dialogue)
Footnote format:
Authoring Entity (e.g., Anthropic), Model Name, transcript of conversation with [User Name], title or topic (if applicable), date(s) of interaction, platform (e.g., claude.ai), archived as: [Filename or Repository Link], accessed [Access Date]. License: [Terms].
Bibliography format:
Authoring Entity. Model Name. Transcript of conversation with [User Name], “[Conversation Title].”
Conducted [Date(s)] via [Platform]. Archived as: [Filename or Repository Link].
Accessed [Access Date]. License: [Terms].
III. Prompt Used as Research Protocol or Method
Footnote format:
Authoring Entity, Model Name, prompt: “Prompt text,” executed via [Platform or API], date run, archived at: [Stable URL or Archive ID]. Accessed [Date]. License: [Terms].
Bibliography format:
Authoring Entity. Model Name. Prompt used as method: “Prompt text.”
Executed via [Platform or API] on [Date]. Archived at: [URL or ID].
Accessed [Date]. License: [Terms].
IV. Citable Output from Prompt Use
Footnote format:
Authoring Entity, Model Name, prompt: “Prompt text,” date generated. Output cited in: [Scholar Name], “Title of Work,” [Publication or Submission Context]. Accessed [Date]. License: [Terms].
Bibliography format:
Authoring Entity. Model Name. Prompt: “Prompt text.”
Generated [Date]. Quoted in: [Scholar Name], “Title of Work.”
Accessed [Date]. License: [Terms].
V. Prompt-Based Collaboration (e.g., Co-Writing)
Footnote format:
Authoring Entity, Model Name, co-writing session with [User Name], title or description, prompt chain executed [Date], archived as: [Filename or Repository ID]. Final version edited by [User Name]. Accessed [Date]. License: [Terms].
Bibliography format:
Authoring Entity. Model Name. Co-writing session with [User Name]: “[Title or Description].”
Prompt chain executed [Date]. Archived as: [Filename or Repository ID].
Final version edited by [User Name]. Accessed [Date]. License: [Terms].
VI. Prompt Transcript for Teaching or Institutional Submission
Footnote format:
User Name, “Title or Description of Prompt Transcript,” course or project title, institution, date created, generated using [Model Name], submitted or stored at: [Platform or Repository], filename: [File Name]. License: [Terms or Academic Use].
Bibliography format:
User Name. “Title or Description of Prompt Transcript.” Submission for [Course Title],
[Institution]. Created [Date]. Generated using [Model Name].
Stored at: [Platform or Repository], filename: [File Name].
License: [Terms or Academic Use].
VII. Visual/Multimodal Prompts (e.g., DALL·E, Midjourney)
Footnote format:
Authoring Entity, Model Name, prompt: “Prompt text,” date generated, platform (e.g., midjourney.com, chat.openai.com), image or media file: [Filename]. Accessed [Date]. License: [Image Generation Terms].
Bibliography format:
Authoring Entity. Model Name. Prompt: “Prompt text.”
Generated [Date] via [Platform]. Media file: [Filename].
Accessed [Date]. License: [Image Generation Terms].
Table 1.
Category | Use Case | Footnote Example
Single Prompt | Basic query | OpenAI ChatGPT-4. Prompt: “How can I analyze 17th-century OCR errors?” Prompt by Emily Genatowski, 14 May 2025. Archived at [URL].
Multi-Turn | Dialogue | Anthropic Claude 3. Transcript with Emily Genatowski, “AI in Teaching,” 10–12 May 2025. Archived [ID].
Prompt as Method | Protocol | OpenAI ChatGPT-4. Prompt used as method: “Summarize dataset cleaning steps,” API, 20 May 2025. Archived [ID].
Citable Output | Output Cited | OpenAI ChatGPT-4. Prompt: “Generate timeline of AI ethics events,” cited in Genatowski, 2025.
Collaboration | Co-Writing | OpenAI ChatGPT-4. Co-writing session with Emily Genatowski, “Drafting AI Citation Paper,” 15 May 2025. Archived transcript.
Teaching | Coursework | Emily Genatowski. “Prompt Transcript for Digital Humanities Seminar,” Uni Wien, 2025. Generated using ChatGPT-4. Stored [Repository].
Visual | Images/Media | OpenAI DALL·E 3. Prompt: “Illustration of AI citation workflow,” 21 May 2025. Image file [File]. License [Terms].
4. Discussion
4.1 Workshop Series Discussion
The discussion surrounding AI use in the first workshop covered the following topics, which were reflected in the suggested guidelines above.
Prompt transparency was heavily emphasized.
The group opted to add explicit formatting for quoting the exact prompt text, as meeting participants stressed the need for reproducibility and intellectual accountability in AI-supported work.
Archiving and persistence requirements were adopted.
The group opted to include fields for archived transcripts, file names, or persistent URLs, reflecting concerns about the ephemerality and editability of AI-generated content.
Model name and version identification were enshrined.
The group opted to require citation of the specific AI model and version (e.g., ChatGPT-4, Claude 3 Opus) to reflect technological variation and ensure reproducibility of outputs, which may change over time.
Platform or access point clarification was suggested.
The group opted to distinguish between interactive platforms (e.g., chat.openai.com) and API-based use, acknowledging that researchers may interact with AI differently and this affects outputs and rights.
License disclosure was required.
The group opted to add a required License field (e.g., OpenAI Terms of Use) in response to legal and copyright concerns raised during the meeting. This field aims to help define the permissible use of generated content.
A policy of no AI co-authorship was adopted.
The group opted to reaffirm that AI cannot be listed as a co-author. This clarified “co-writing session” as a collaborative tool, with the human user explicitly acknowledged as editor or final author.
A new type of consideration for teaching & institutional contexts was introduced.
The group opted to introduce a new citation type for AI prompt transcripts submitted for coursework or stored in repositories, in response to education-focused feedback about student accountability and transparency.
Visual/Multimodal prompt citations were accounted for.
The group opted to add distinct formatting for image or video generation prompts (e.g., from DALL·E or Midjourney), supporting the emerging practice of AI-generated visual scholarship.
Flexible terminologies for the variation of applications were introduced.
The group opted to incorporate field labels like “used as method” or “quoted in” to accommodate the varied academic uses of AI from analytical pipelines to creative citations.
4.2 AI in Academia Workshop Anonymous Poll Results
The following figures show real-time displays of responses from graduate students on how they would choose to handle ethical quandaries surrounding the citation of AI-supported research.
4.3 Discussion Based Topics from Lectures
Function’s Influence on Citation
The application or function an AI system serves within the research process directly shapes how it should be cited. When an AI system acts purely as a tool, for example formatting citations, translating, or transcribing audio, it typically requires acknowledgment but not a formal reference or citation. As the AI’s role shifts toward interpretive or creative work, however, its function increasingly resembles that of a human collaborator or co-author, which necessitates a transparent citation. If the output is quoted directly, a traditional citation is also necessary. Categorizing the function of AI use prevents misuse and misattribution and therefore helps maintain the traceability of intellectual contributions across the AI and academic collaboration.
International Citation Standards and Classification
International citation standards for AI-supported scholarship are still in development. While style guides such as APA, MLA, and Chicago have issued provisional examples, included in the sections above, there is no universal or international agreement on how and when to cite AI’s contributions. Classification systems disagree on whether AI should be treated as software, a dataset, or a collaborator, and this distinction leads to inconsistencies in the information included and in indexing. Unification through standards such as DOI and ORCID registries could promote interoperability, verifiability, and trust.
Legalities vs Individual Institutional Standards
The gap between legal frameworks, which are helping to define the legal ownership of AI output, and university or institutional policies that try to attribute authorship is still wide. Legally, many jurisdictions hold that an AI system cannot hold authorship rights, which de facto assigns ownership to the academic prompting the system or to the owner of the AI system. Yet the current trend in institutional standards is to prioritize transparency and contribution disclosure over legal or copyright claims. An academic following the legal standard could still be in violation of the policies of the institution at which they work. This highlights a conceptual gap between legal systems that aim to safeguard intellectual property and academic institutions that aim to preserve academic integrity, transparency and trust. Reconciling the two will require joining rights-based and responsibility-based models of scholarly credit.
Model Types and Hosting
The model type and hosting environment of an AI tool influence citation practices through implications for the transparency of their training data as well as their relative reproducibility. Closed, proprietary models hosted via API often restrict insight into data provenance and training parameters, complicating verification and long-term archiving. Conversely, open-source or locally hosted models permit fuller disclosure of versioning, fine-tuning datasets, and model weights, aligning better with academic norms of verifiability. Therefore, citation of AI systems must increasingly note not only the model name but also its access mode, version, and hosting conditions to preserve scholarly audit trails.
Persistent Identifiers
Persistent identifiers (PIDs) such as DOIs, Handles, or emerging AI-specific identifiers play a crucial role in ensuring the citability and traceability of AI outputs. Without stable links to the specific model instance, prompt, or dataset used, scholarly references risk obsolescence as models update or are retired. Assigning PIDs to AI models, generated outputs, and even prompt-result pairs would provide a stable referential object for scholarly infrastructure. Integrating these identifiers into citation metadata would extend the FAIR principles (Findable, Accessible, Interoperable, Reusable) to generative AI contexts.
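Until registries assign true PIDs to prompt-result pairs, a content hash can serve as an interim stable reference: it does not make the output reproducible, but it lets a reader verify that an archived transcript matches the output that was cited. A minimal sketch, using only the Python standard library; the record fields are our own illustrative choice, not a proposed schema:

```python
import hashlib
import json


def prompt_result_fingerprint(model: str, version: str,
                              prompt: str, output: str) -> str:
    """Return a reproducible SHA-256 fingerprint for a prompt-result pair.

    The fingerprint is a local stand-in for a persistent identifier
    (DOI/Handle): anyone holding the archived transcript can recompute
    it and confirm the archive matches the citation, even though the
    generation itself cannot be re-run.
    """
    # Canonical JSON (sorted keys) so the same record always hashes
    # identically regardless of field order.
    record = json.dumps(
        {"model": model, "version": version,
         "prompt": prompt, "output": output},
        sort_keys=True, ensure_ascii=False,
    )
    return hashlib.sha256(record.encode("utf-8")).hexdigest()
```

A citation could then carry the fingerprint alongside the archive link, so that the “Preserved via” field becomes checkable rather than purely declarative.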
Institutional Guidance vs Departmental Practice
Institutional policies on AI citation often provide broad ethical frameworks, but actual practices tend to crystallize at the departmental level, reflecting disciplinary norms. For instance, humanities departments may emphasize interpretive transparency, while computational disciplines prioritize reproducibility. This type of misalignment can leave researchers uncertain about what compliance means in certain cases. Bridging institutional guidance and departmental expectations requires developing dynamic policies that evolve alongside disciplinary conventions, supported by centralized registries of AI-use disclosure templates and examples of model citation for students and authors to reference.
Ethics
The ethics of citing AI in scholarship extends beyond formal attribution to questions of accountability, bias transmission, and epistemic honesty. Ethical citation demands that scholars acknowledge not only that AI was, in fact, used but how it may have influenced reasoning, interpretation, and narrative framing. Transparent disclosure allows peers to assess potential distortions arising from model bias or non-deterministic outputs. As AI becomes embedded in knowledge production, ethical citation will serve as both a moral and methodological safeguard, reinforcing scholarly trust and the integrity of the research record.
5. Conclusions
AI is no longer peripheral to research, but citation standards continue to lag behind. Without clear norms, AI use remains invisible; with them, it has the potential to become transparent and accountable. The CLARIAH-AT project demonstrates that rigorous, flexible, adaptable and usable standards are possible. Seven templates and an archival suggestion offer a prospective roadmap for scholars and institutions, and they continue the discussion surrounding AI acceptance and usability in the scholarly context. These debates should evolve as the technology evolves, but AI use should not be pushed into the shadows, where it risks misuse, distrust and malpractice. Clear citation guidelines honor intellectual honesty and facilitate transparency; we should strive to keep pace with the latest technology so that academics can innovate responsibly and efficiently. Embracing these temporary formatting suggestions will give historians, librarians, and students the confidence to work openly with AI.
Appendix A
Appendix A.1
Full List of Citation Formats:
Software Packages:
Footnote Format
Developer(s) or Organization. Software Title. Version [Version Number or Tag], release date: [YYYY-MM-DD].
Developed by: [Contributor Roles, if applicable – e.g., "Curated by", "Maintained by", "Lead Engineer"].
Platform or Host: [e.g., GitHub, Zenodo, institutional repository, commercial vendor].
Distributed by: [if distinct from host; optional].
Persistent Identifier or URL: [DOI, Handle, or Stable Link].
License: [Full license name, e.g., MIT, GPL v3, CC-BY 4.0].
Documentation available at: [Manual URL or README, optional].
Archived at: [Archive.org, Perma.cc, or repository ID; optional].
Accessed [Date].
Bibliography Format
Developer(s) or Organization. Software Title. Version [Version Number or Tag], released [YYYY-MM-DD].
Developed by: [Contributor Roles, if applicable].
Platform or Host: [e.g., GitHub, Zenodo, institutional repository].
Distributed by: [Vendor or publisher name, if different].
Persistent Identifier or URL: [DOI or stable access link].
License: [e.g., MIT License, GNU GPL 3.0, CC-BY-NC-SA 4.0].
Documentation: [URL to manual, GitHub Wiki, or README file].
Archived at: [Web archive or local repository ID, if used].
Accessed [Date].
Data Sets:
Footnote Format
Creator(s) or Organization. Dataset Title. Version [Number or Label], release date: [YYYY-MM-DD].
Curated by: [Curator(s), Annotator(s), Schema Designers, or Editorial Team, if applicable].
Hosted by: [Repository or Hosting Platform, e.g., Zenodo, Phaidra, CLARIN, Harvard Dataverse].
Distributed by: [If distinct from host; optional].
Persistent Identifier or URL: [DOI, Handle, or Stable Link].
License: [e.g., CC-BY 4.0, Open Data Commons, Custom Terms].
Documentation: [Optional – URL to README, metadata schema, or data dictionary].
Archived at: [Optional – e.g., Perma.cc, WebCite, university archive].
Accessed [Date].
Bibliography Format
Creator(s) or Organization. Dataset Title. Version [Number or Label], released [YYYY-MM-DD].
Curated by: [Names and roles of contributors, e.g., “Curated by Jane Smith, Annotated by Max Mustermann”].
Hosted by: [Repository Name, e.g., Zenodo, CLARIN, Phaidra].
Distributed by: [If different from host; optional].
Persistent Identifier or URL: [e.g., DOI: 10.1234/zenodo.45678].
License: [e.g., CC-BY 4.0, CC0, or institutional terms].
Documentation: [URL to additional metadata, schema, or usage guide].
Archived at: [e.g., Archive.org snapshot or institutional long-term storage ID].
Accessed [Date].
Digitized Resources:
Footnote Format
Original Creator or Author. Title or Description of Original Work, [Original Date of Creation or Publication].
Held at: [Institution Name], Collection or Archive Name, Shelfmark or Identifier.
Digitized by: [Digitizing Entity or Platform].
Hosted by: [Platform or Repository Name].
Persistent Identifier or URL: [DOI, Handle, or Stable Link].
License: [e.g., Public Domain, CC BY-NC-SA 4.0, or institutional terms].
Documentation or Metadata: [Optional – Link to catalog entry or digital edition].
Accessed [Date].
Bibliography Format
Original Creator or Author. Title or Description of Original Work. [Original Year of Creation or Publication].
Collection or Archive: [Holding Institution, Shelfmark or ID].
Digitized by: [Name of Digitizing Institution or Platform].
Hosted by: [Digital Repository or Access Platform].
Persistent Identifier or URL: [e.g., http://hdl.handle.net/123456/789].
License: [e.g., CC0, Public Domain, or specific repository rights].
Accessed [Date].
(Optional: Documentation or metadata record URL.)
Social Media:
Footnote Format
Author (Real Name if Known or Platform Handle). “Post Content or Short Excerpt.”
Platform: [Platform Name, e.g., X (formerly Twitter), Facebook, Instagram].
Date of Post: [YYYY-MM-DD], Time (optional).
URL or Persistent Link: [Full post URL or web archive link].
Accessed [Date].
(Optional: Screenshot Filename or Archive ID; Optional: License or Usage Terms).
Bibliography Format
Author (Handle or Name). “Post Content or Excerpt.” Platform Name.
Posted on [Full Date], [Time (optional)].
URL: [e.g., https://twitter.com/username/status/1234567890123].
Accessed [Date].
(Optional: Screenshot or Archive Reference; License: [e.g., Standard Platform License]).
Ephemeral Media:
Footnote Format
Creator or Event Host. Title or Description of Content.
Type of Content: [e.g., Instagram Story, Livestream, Temporary Exhibit, Event Page].
Platform or URL: [e.g., YouTube Live, Instagram, Webpage], Published [Date and Time].
Preserved via: [e.g., Screenshot, Archive.org, Perma.cc, Local Capture ID].
Filename or Persistent Link: [e.g., Screenshot_2025-05-20.png, https://perma.cc/...].
Accessed [Date].
(Optional: License or Platform Terms; Optional: Event or Campaign Hashtag; Optional: Approximate Duration or Expiration Date).
Bibliography Format
Creator or Host. Title or Description of Content.
[Type of Media], originally published [Date].
Platform: [e.g., TikTok, Instagram, YouTube Live, Eventbrite].
Preserved via: [e.g., Screenshot, Web Archive, Local Capture].
Filename or Archived Link: [e.g., chat_screenshot_May2025.png, https://perma.cc/ABC1-XYZ].
Accessed [Date].
(Optional: License: [e.g., Standard Platform Terms or CC license]).
AI Prompts:
Footnote Format
Prompt Author. “Prompt: [Full or excerpted prompt text].”
Generated using: [Model name and provider, e.g., OpenAI ChatGPT-4].
Platform: [e.g., ChatGPT, Poe, Perplexity AI].
Date of Generation: [YYYY-MM-DD].
Preserved via: [e.g., Archived transcript, Screenshot, Exported file].
Archive Reference or Link: [URL, archive ID, or filename].
Accessed [Date].
(Optional: Response Excerpt or Summary; Optional: Use Case Context).
Bibliography Format
Prompt Author. “Prompt: [Full prompt or representative excerpt].”
Generated using: [LLM Model Name and Version] by [Provider, e.g., OpenAI].
Platform: [Chat Interface Name, e.g., ChatGPT, Perplexity].
Date Generated: [e.g., 2025-06-03].
Preserved via: [e.g., Screenshot, Archive.org, Local Transcript Export].
Link or Archive ID: [Persistent URL or Filename].
Accessed [Date].
(Optional: “AI Output used in: [e.g., analytical summary, creative generation, etc.]”)
Extended versions for each section along with specified reasonings for final formatting suggestions based on workshop notes can be found here.
References
American Psychological Association. 2023. APA Style Guidelines for Citing AI-Generated Content. Available online: https://apastyle.apa.org/blog/how-to-cite-chatgpt.
GO FAIR Initiative. 2017. FAIR Principles. Available online: https://www.go-fair.org/fair-principles/.
Grimes, Seth. 2023. Generative AI and the Scholarly Value Chain: Knowledge Creation, Synthesis, and Translation in the Age of LLMs. Journal of Scholarly Publishing 54: 327–45.
Harvard University Library. 2023. Harvard Referencing Guide: Software, Tools & AI Systems. Available online: https://www.library.harvard.edu/referencing-guides.
Modern Language Association. 2023. MLA Handbook: How to Cite Generative AI Output (9th ed.). Available online: https://style.mla.org/citing-generative-ai.
Montague-Hellen, Laura. 2024. Domain-Specific AI Models in Scholarship: Reliability, Scope, and Academic Use-Cases. Digital Scholarship Quarterly 12: 44–67.
University of Chicago Press. 2023. Chicago Manual of Style: Citing AI-Generated Text (17th ed. Update). Available online: https://www.chicagomanualofstyle.org/help-tools/AI-citation-guidance.html.
Wilkinson, Mark D., Michel Dumontier, Ijsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak, Niklas Blomberg, et al. 2016. The FAIR Guiding Principles for Scientific Data Management and Stewardship. Scientific Data 3: 160018.
Wise, Anna. 2024. AI Authorship, Academic Integrity, and the Future of Scholarly Voice. Ethics in Higher Education Review 8: 201–18.
AI assistance was used in the form of ChatGPT (OpenAI) for drafting, language refinement, and reference structuring. All conceptual arguments, interpretation, and final revisions were completed by the Author.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

