V MeLCi Lab Autumn School 2025 – AI Research Practice in Media and Communication Science: a bootcamp to improve hands-on research skills
Short description
Recent breakthroughs in artificial intelligence (AI), particularly in developing Large Language Models (LLMs) such as GPT-4 or Gemini, are reshaping the research landscape across disciplines. In communication, media studies, and audience research, AI technologies are increasingly used for tasks ranging from literature searches and data annotation to audience segmentation and discourse analysis (Natale, 2021). These tools offer unprecedented speed, scale, and efficiency, but also raise significant methodological and ethical challenges concerning transparency, bias, and the implications of AI-generated knowledge (Ferrara, 2024).
At the same time, the broader media environment is undergoing a profound transformation driven by algorithmic mediation, datafication, and the personalisation of information flows. This context demands that researchers adopt new methodological tools and critically engage with AI’s political, social, and ethical consequences in media ecologies.
Bridging the use of AI in research with critical inquiry into its societal impacts requires an integrative, interdisciplinary approach. Therefore, researchers must develop technical competencies with AI tools while maintaining a reflexive stance towards the broader power structures and biases these technologies can perpetuate. In this light, the 2025 MeLCi Lab Autumn School intends to offer a platform to address these dual imperatives – fostering methodological innovation alongside critical literacy about the role of AI in communication and media studies.
The school will be held in English.
Call for proposals
The MeLCi Lab Autumn School invites applications from PhD students, postdoctoral researchers, and early-career scholars for a four-day intensive online program focused on innovative research methods at the intersection of AI, Communication, and Media Studies.
The School combines practical workshops and keynote lectures, allowing participants to develop hands-on skills with classical and AI-driven methodologies.
In 2025, the school’s AI tracks are specifically designed to meet the needs of PhD students, postdoctoral researchers, and early-career scholars in media studies. Participants will explore case studies and practical examples directly relevant to media analysis, digital journalism, and content curation. The sessions will address unique challenges in media-related research, such as bias in content classification, audience segmentation, and the interpretative complexity of multimedia annotation. Interactive workshops and tailored exercises will enable participants to apply AI tools to media-specific datasets, ensuring immediate applicability and facilitating deeper understanding through experiential learning.
In this sense, contributions addressing (but not limited to) the following tracks will be considered.
Track 1: AI in Research Practice: Foundations, Methods, and Ethics
- 1. Foundations of current AI tools → Recent natural language processing (NLP) breakthroughs, particularly through large language models (LLMs) such as GPT-4, Claude, and Gemini, have significantly transformed research methodologies across disciplines. The unprecedented accessibility and effectiveness of zero- and few-shot prompting techniques have led to widespread adoption, sometimes even replacing traditional human coders (Gilardi et al., 2023; Grossmann et al., 2023; Ziems et al., 2024). Yet, these powerful tools introduce critical concerns regarding reproducibility, transparency, and ethical use. Prompt stability is a particular concern: minor prompt adjustments can alter LLM responses, challenging the replicability and accountability of research (Barrie et al., 2024). This subtrack equips researchers in communication science with essential knowledge of the theoretical foundations of contemporary AI tools, highlighting methodologies and best practices for their ethical and accountable use.
- 2. Accountable Literature Search Using AI Tools → AI-powered tools such as SciSpace and Litmaps have radically improved the efficiency and comprehensiveness of literature searches. However, the convenience of these tools demands heightened accountability from researchers. This subtrack guides participants through strategies to validate AI-generated results, critically assess literature coverage, and maintain transparent documentation practices, ensuring methodological rigour and reliability in AI-assisted literature reviews.
- 3. AI-Assisted Data Annotation in Research Pipelines → Data annotation is a cornerstone in research pipelines, traditionally relying heavily on human coders. However, AI-based annotation tools are emerging as viable and highly effective alternatives, particularly for large datasets. Barrie et al. (2024) highlight that prompt stability—the consistency of AI-generated annotations across multiple semantically similar prompts—remains a significant challenge. This subtrack introduces participants to AI-driven annotation, focusing on practical approaches to enhancing annotation consistency through frameworks like Prompt Stability Scoring (PSS). Participants will gain hands-on experience in assessing and improving the reliability of AI annotations, integrating responsible AI practices into their research workflows.
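The annotation workflow outlined in this track can be sketched in a few lines of Python. The prompt template and label set below are illustrative assumptions, the LLM itself is treated as a black box, and the stability measure is a simplified mean pairwise-agreement proxy rather than the full Prompt Stability Scoring procedure of Barrie et al.

```python
from itertools import combinations


def zero_shot_prompt(text: str, labels: list[str]) -> str:
    """Build a zero-shot annotation prompt; the wording is illustrative."""
    return (
        f"Classify the text into exactly one of: {', '.join(labels)}.\n"
        f"Text: {text}\n"
        "Answer with the category name only."
    )


def prompt_stability(annotations: dict[str, list[str]]) -> float:
    """Mean pairwise agreement of labels across prompt paraphrases.

    `annotations` maps each prompt variant to the labels it produced,
    one label per document, in the same document order for every variant.
    Returns a value in [0, 1]; 1.0 means fully stable annotations.
    """
    variants = list(annotations.values())
    n_docs = len(variants[0])
    scores = [
        sum(x == y for x, y in zip(a, b)) / n_docs
        for a, b in combinations(variants, 2)
    ]
    return sum(scores) / len(scores)
```

In practice, one would run the same documents through several semantically equivalent prompt paraphrases, collect the labels each paraphrase yields, and compute the agreement score: values well below 1 signal prompt-sensitive annotations that need refinement before an LLM can responsibly stand in for human coders.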
Track 2: Communication, Audiences, and Civic Cultures in the Age of AI
- 1. Civic Cultures and Artificial Intelligence → AI plays a growing role in how citizens engage with the digital world today, bringing with it a set of opportunities and challenges (Sarafis et al., 2025). This subtrack explores the impact of AI-driven platforms and recommendation algorithms on civic engagement, activism, and media literacy.
- 2. Digital Citizenship and Media Literacy in an AI-Mediated World → Leveraging AI and overcoming its challenges requires the development of broad and critical skill sets, the definition of which is still fuzzy (Chiu et al., 2024). This subtrack intends to explore critical media literacy skills in the era of misinformation, deepfakes, and algorithmic personalisation.
- 3. Data Ethics, Equity, and Inclusivity in AI Research → Different biases can emerge from the use of AI systems, and the ethical implications of using these tools for knowledge production are still unclear. While AI is frequently represented as either a magical solution or a looming threat, our Autumn School aims to demystify AI, exploring its realistic capabilities, limitations, and responsible use (Ferrara, 2024; Ntoutsi et al., 2020). This subtrack will focus on responsible research practices, equity, and inclusive research design for underrepresented communities.
Previous experience with AI or data science is not required, as introductory modules will provide a foundational understanding.
The Autumn School will be conducted online and in English.
For inquiries, please contact: melci.lab@ulusofona.pt
Call for proposals deadline
Deadline: 26th September 2025
Notification of Acceptance: 13th October 2025
Registration: 27th October 2025
See details about how to submit a proposal at the bottom of this page.
Format
Online
Dates
11 to 14 November 2025 – V MeLCi Lab Autumn School
TIME (Lisbon time zone)
V MeLCi Lab Autumn School Schedule
TBD
How to apply
Interested participants must send their application (in English) by 26 September 2025, including:
- Updated Curriculum Vitae (max. 3 pages);
- Candidate’s research statement, including a description of their doctoral dissertation, research questions, and methods (max. 2 pages);
- Motivation letter describing your current perspective on AI, specific concerns or interests regarding AI’s role in media practices, and your preferred track/subtrack(s) (max. 1–2 pages).
Please send your application as a ZIP file to melci.lab@ulusofona.pt with the subject “Application for the V MeLCi Lab Autumn School”.
Target-group
PhD Students
Early Career Researchers (with a PhD obtained in the last five years)
Fee *
- Lusófona University, CICANT PhD Students 70 euros
- PhD students from other Institutions 100 euros
- Others 150 euros
*The fee will be waived for the best participant.
Keynote Speakers
TBD
Tutors
TBD
Organisers
The fifth edition of the Autumn School is the result of a partnership between the following CICANT research projects:
- ALQI: an intelligent chatbot to support algorithmic literacy (Seed Funding ILIND).
- [GameIN] – Games Inclusion Lab: Participatory Media Creation Processes for Accessibility (https://doi.org/10.54499/2022.07939.PTDC).
- YouNDigital – Youth, News and Digital Citizenship (PTDC/COM-OUT/0243/2021, https://doi.org/10.54499/PTDC/COM-OUT/0243/2021).
See previous editions of the School below:
IV MeLCi Lab Autumn School
III MeLCi Lab Autumn School
II MeLCi Lab Autumn School
I MeLCi Lab Autumn School
________________________________________________________________________________________________
References
Barrie, C., Palaiologou, E., & Törnberg, P. (2024). Prompt stability scoring for text annotation with large language models. arXiv preprint arXiv:2407.02039. https://doi.org/10.48550/arXiv.2407.02039
Chiu, T. K., Ahmad, Z., Ismailov, M., & Sanusi, I. T. (2024). What are artificial intelligence literacy and competency? A comprehensive framework to support them. Computers and Education Open, 6, 100171. https://doi.org/10.1016/j.caeo.2024.100171
Ferrara, E. (2024). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), 3. https://doi.org/10.3390/sci6010003
Gilardi, F., Alizadeh, M., & Kubli, M. (2023). ChatGPT outperforms crowd workers for text-annotation tasks. Proceedings of the National Academy of Sciences, 120(30), e2305016120. https://doi.org/10.1073/pnas.2305016120
Grossmann, I., Feinberg, M., Parker, D. C., Christakis, N. A., Tetlock, P. E., & Cunningham, W. A. (2023). AI and the transformation of social science research. Science, 380(6650), 1108-1109. https://doi.org/10.1126/science.adi1778
Natale, S. (2021). Communicating through or communicating with: Approaching artificial intelligence from a communication and media studies perspective. Communication Theory, 31(4), 905–910. https://doi.org/10.1093/ct/qtaa022
Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder‐Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., . . . Staab, S. (2020). Bias in data‐driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews Data Mining and Knowledge Discovery, 10(3). https://doi.org/10.1002/widm.1356
Sarafis, D., Karamitsios, K., & Kravari, K. (2025). AI and Civic Engagement: A Brief Exploration of Applications and Opportunities. 2025 International Conference on Advancement in Data Science, E-learning and Information System (ICADEIS), 1–6. https://doi.org/10.1109/icadeis65852.2025.10933183
Ziems, C., Held, W., Shaikh, O., Chen, J., Zhang, Z., & Yang, D. (2024). Can large language models transform computational social science? Computational Linguistics, 50(1), 237–291.