The Usefulness of AI in Academic Research: University Faculty Perceptions and Profiles

Lorena Pérez-Penup

Universidad Don Bosco

Soyapango, El Salvador

lorena.perez@udb.edu.sv

https://orcid.org/0000-0002-4061-6091

_______________________________________

Irvin Romero

Independent Researcher

San Salvador, El Salvador

iromero.docencia@gmail.com (Corresponding author)

https://orcid.org/0000-0003-4926-3445

How to cite this article?

Pérez-Penup, L., & Romero, I. (2025). The Usefulness of AI in Academic Research: University Faculty Perceptions and Profiles. Revista Educación, 49(2). http://doi.org/10.15517/revedu.v49i2.1092

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.

Revista Educación, 2025, 49(2), July-December

Utilidad de la IA en la investigación académica: percepciones y perfiles del profesorado universitario

Scientific research article

ISSN: 0379-7082 / e-ISSN 2215-2644

Received: February 10, 2025

Accepted: May 26, 2025

ABSTRACT

Although the use of AI applications in academic research has the potential to significantly boost scientific productivity, successful integration of AI requires a thorough understanding of how this technological wave has impacted university faculty. This study aimed to contribute to that understanding by examining faculty perceptions regarding the usefulness of AI in academic research. The study used a quantitative, exploratory-descriptive methodology with a non-experimental, cross-sectional design. Data were collected through a survey addressing 25 research-related tasks grouped into five categories: Literature Review, Methodology Selection, Reflection on Results, Communicative Activities, and Enhanced Disciplinary Understanding. Responses from 244 faculty members revealed three distinct perspectives: enthusiasts, selective adopters, and skeptics. The results demonstrate that AI is perceived as particularly beneficial for activities related to synthesizing information and consolidating knowledge about the discipline. These findings underscore the need for customized AI policies and targeted support to bridge knowledge gaps and promote greater faculty engagement in the use of AI as part of their research activities.

KEYWORDS: Artificial Intelligence, Higher Education, Faculty, Academic Research Projects.

RESUMEN

Si bien se puede presumir el potencial de las aplicaciones de la IA en la investigación académica en cuanto a la mejora de la productividad científica, para lograr una integración efectiva se requiere de un entendimiento amplio sobre cómo el cuerpo docente universitario está viviendo esta oleada tecnológica. El presente estudio buscó contribuir a este entendimiento mediante una medición de las percepciones de las personas docentes sobre la utilidad de la IA en las investigaciones académicas. Debido a esto, se utilizó un enfoque cuantitativo, exploratorio-descriptivo, con diseño no experimental y transversal. Los datos se obtuvieron mediante una encuesta que abordó 25 acciones relacionadas con la investigación, distribuidas en cinco fases: Revisión de Literatura, Selección de Metodología, Reflexión sobre Resultados, Actividades Comunicativas y Consolidación del Conocimiento Disciplinar. Las respuestas de 244 docentes permitieron identificar tres perfiles de opiniones: entusiastas, selectivos y escépticos. Además, revelaron que la IA es vista como más útil para tareas de síntesis de información y consolidación de conocimiento disciplinar. Estos hallazgos resaltan la necesidad de políticas de IA adaptadas y de apoyo específico para cerrar las brechas de conocimiento y fomentar mayor participación del cuerpo docente en el uso de esta tecnología en sus tareas científicas.

PALABRAS CLAVE: Inteligencia Artificial, Educación superior, Profesorado, Proyectos de investigación académica.

INTRODUCTION

Recently, the adoption of artificial intelligence (AI) has surged across all sectors of society (Joshi et al., 2020), as AI is capable of imitating human intellectual skills and demonstrating a degree of autonomy to perform a wide range of tasks (Sheikh et al., 2023). Higher education is no exception, with AI influencing various aspects of learning and teaching at the university level. Applications range from administrative tasks, such as admissions decisions and course scheduling, to more academic-focused functions like automated grading, feedback mechanisms, student engagement evaluation, and monitoring of academic integrity (Zawacki-Richter et al., 2019).

AI has also impacted academic research, defined by Brew (2002) as a systematic inquiry designed to generate new knowledge through exploration and discovery, typically conducted by university faculty in addition to their regular teaching duties. While academic research can be independent of teaching, it may also overlap with applied research, where faculty integrate research findings into their teaching practices to enhance learning and improve educational methods (Brew, 2002).

Studies highlight that AI can be used effectively in academic research to conduct data analysis, perform literature reviews, and generate research ideas. This is especially advantageous during the initial stages of developing research concepts and theoretical frameworks (Burger et al., 2023; Khlaif et al., 2023). These applications are expected to improve efficiency and productivity in academic research (Christou, 2023a). The adoption of AI, however, also presents challenges. Concerns have emerged regarding the accuracy and reliability of AI-generated information, particularly biased responses and the potential for errors or misleading data that may compromise the validity and credibility of academic research (Michel-Villarreal et al., 2023).

Despite the potential demonstrated by AI in higher education, there is a scarcity of research analyzing how university faculty incorporate AI into their practice, particularly concerning academic research (United Nations Educational, Scientific and Cultural Organization [UNESCO], 2023). The disparity is more pronounced in developing countries, where access to AI tools and resources is often restricted (Crompton & Burke, 2023). This study examines faculty perceptions of the usefulness of AI in academic research in order to understand its integration into, and contribution to, the field.

This article includes a literature review that explores the applications, benefits, and challenges of AI in academic research. It then presents the methodology and findings, and concludes by discussing the implications, limitations, and future directions of AI integration into academic research.

Literature review

Studies on the use of artificial intelligence (AI) in academic research have produced both optimistic and cautious perspectives. Velarde (2021) compared the opinions of AI researchers with World Intellectual Property Organization statistics, finding overall optimism among researchers about the potential of AI due to its ability to automate tasks. Hamilton et al. (2023) reinforced this finding, demonstrating that AI enables faster data processing by automating time-consuming tasks such as data analysis and pattern identification (Chubb et al., 2022; Xu et al., 2021). This efficiency allows researchers to dedicate more time to complex and demanding aspects of their work.

Another notable way AI supports researchers is by facilitating the understanding of theoretical concepts. Giray et al. (2024) conducted a SWOT analysis to evaluate the applicability of AI in academic research, identifying its ability to simplify complex topics as a key strength. This capability facilitates theoretical comprehension (Christou, 2023b), streamlines literature reviews, and enables translations that overcome language barriers, further aiding theoretical exploration. Additionally, other researchers suggest that AI fosters methodological innovation by providing new ways to analyze data and engage with literature (Butson & Spronken-Smith, 2024).

In contrast, some researchers have raised concerns about AI’s limitations. Morgan (2023) noted that tools like ChatGPT can efficiently identify basic themes but emphasized that researchers’ analytical and interpretative skills are essential to ensure research quality. Otherwise, reliance on AI may lead to poor analysis, plagiarism, or misinformation (Giray et al., 2024). Similarly, Dahal (2024) cautioned about the ethical implications, including the potential for generating misleading research information.

AI’s lack of critical thinking remains a drawback. Christou (2023b) investigated whether AI could generate qualitative research theories and found that, while AI can contribute to theory development through data analysis and information retrieval, it cannot autonomously generate theory. This underscores the value of researchers’ analytical skills and critical thinking in producing high-quality academic research (Hamilton et al., 2023).

Furthermore, AI lacks emotional and social components, such as collaboration and motivation, which are critical for developing new ideas. Chubb et al. (2022) reported that leading academics advocate for integrating AI into research processes, so long as it does not replace human judgment or peer review. Researchers’ discussions, reflections, and validations play an important role in refining analyses (Hamilton et al., 2023). Consequently, AI is best viewed as a research assistant rather than a replacement for human researchers.

While most studies highlight both the advantages and challenges of using AI in academic research, some studies offer a more nuanced perspective. Mah and Groß (2024) surveyed over 100 academic staff from various German universities and identified four distinct profiles regarding AI perceptions among faculty members. The findings revealed that some faculty members are enthusiastic about AI’s potential, but others remain cautious, recognizing both its benefits and drawbacks. They also found that some faculty members exhibit a balanced approach, while others are reluctant to adopt AI. The authors suggest that AI literacy plays a key role in shaping these perceptions.

Brown et al. (2024) confirmed this finding, noting that factors including gender, age, and socioeconomic status influence the perceptions of AI held by students and staff at a British university. The study identified a correlation between frequent use of AI and being male, young, and high-income. Professional background also profoundly influences AI adoption. Oliveira et al. (2024) discovered that doctoral students working fully online use AI tools more readily than those in face-to-face settings, who tend to encounter more difficulties in AI integration. Harris (2024) reported that faculty in technology-related disciplines in the United States displayed more openness to AI integration than their arts and humanities counterparts. Furthermore, full-time faculty were more likely to use AI tools in their academic research than adjunct faculty.

The majority of research has focused on comparing the analytical capabilities of AI with those of human researchers (Hamilton et al., 2023; Morgan, 2023) or gathering perspectives from AI experts (Chubb et al., 2022; Velarde, 2021). Nevertheless, limited research has investigated faculty members’ perceptions of the use of AI in academic research. Their insight is essential for understanding the role of AI in academic research and the ethical standards applicable to both faculty and students, who are future scholars. This gap highlights the need for additional research into faculty members’ viewpoints regarding the usefulness of AI in academic research.

METHODOLOGY

This study employed a quantitative, exploratory-descriptive approach with a non-experimental, cross-sectional design. The objective was to describe current faculty perceptions of AI in academic research rather than to investigate personal experiences or in-depth interpretations. A quantitative approach was considered most appropriate because it systematically measures and describes these trends, providing quantifiable insights that mitigate the subjectivity often associated with qualitative data (Hernández-Sampieri & Torres, 2018). This method also facilitates the application of the results to a larger group of university faculty, supporting broader conclusions about how useful faculty consider AI to be in academic research contexts.

The study population comprised 616 faculty members at Universidad Don Bosco (UDB) in El Salvador: 156 full-time and 460 adjunct faculty members. A simple random sample of 237 participants was drawn from this population, calculated for a 99% confidence level and a 5% margin of error, thereby supporting the representativeness of the sample and the extrapolation of the results.

Data collection consisted of a survey based on a questionnaire adapted from the R-Comp scale developed by Böttcher and Thiel (2017). This scale evaluates five dimensions of research competence, which were redefined in this study to correspond with standard research phases: (1) Reviewing the State of Research, (2) Methodology, (3) Reflection on Research Findings, (4) Communication, and (5) Subject Knowledge. The R-Comp scale was chosen for its interdisciplinary framework delineating five specific research tasks, a structure that corresponds well with the study’s objective of analyzing faculty perceptions throughout the different research phases. The reliability of the scale has been previously validated, with Cronbach’s α values for each sub-dimension between 0.73 and 0.87, indicating strong internal consistency across various research contexts.

The adaptation process involved a back-translation of the original 32 statements from German to Spanish and subsequently back to German, conducted by two bilingual experts. A panel of education experts then reviewed the items to ensure content validity and recommended modifications to enhance clarity and relevance. A pilot test was conducted with a subsample of UDB adjunct faculty members. The final instrument consisted of 25 items distributed across the research phases as follows:

● Research Status Review: Items 1-4

● Methodological Framework: Items 5-11

● Analysis of Research Outcomes: Items 12-17

● Communication: Items 18-20

● Subject Knowledge: Items 21-25

Responses were measured on a 5-point Likert scale ranging from strongly disagree to strongly agree. Sociodemographic data were collected on gender, age, faculty affiliation, contract type, academic degree, teaching engagement level, publishing activity, and professional development.
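For illustration only, the aggregation of the 25 Likert items into the five phase scores used in the subsequent analyses can be sketched in Python as follows. The file name and item column names (item_01 through item_25) are hypothetical placeholders, not the actual survey export.

    import pandas as pd

    # Hypothetical mapping of questionnaire items to the five research phases
    phases = {
        "State of the Art": range(1, 5),          # items 1-4
        "Methodology": range(5, 12),               # items 5-11
        "Reflect on the results": range(12, 18),   # items 12-17
        "Communication": range(18, 21),            # items 18-20
        "Subject Knowledge": range(21, 26),        # items 21-25
    }

    responses = pd.read_csv("survey_responses.csv")  # illustrative file name

    # Per-respondent phase score: mean of the 1-5 Likert ratings for that phase
    phase_scores = pd.DataFrame({
        phase: responses[[f"item_{i:02d}" for i in items]].mean(axis=1)
        for phase, items in phases.items()
    })
    print(phase_scores.describe())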

The survey was distributed to all 616 faculty members via an anonymous Google Form to ensure confidentiality and voluntary participation. Data collection continued until the sample size was attained.

ChatGPT-4o (OpenAI, 2024) was employed for data exploration and processing, specifically to conduct cluster profile analysis, categorize participants based on their response patterns, and provide a comprehensive overview of faculty members’ views on the role of AI throughout the various research phases. Pearson correlations were calculated using Jamovi 2.3.28 to examine whether positive perceptions of AI in one phase of the research process were associated with positive perceptions in the other phases. Jamovi 2.3.28 was also used for descriptive statistics and cross-tabulations to explore relationships between faculty members’ perceptions of AI utility for various research tasks and their demographic characteristics.
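The descriptive statistics and Pearson correlations reported below were produced in Jamovi; an equivalent computation could be sketched in Python as follows, reusing the hypothetical phase_scores data frame from the previous sketch.

    import pandas as pd
    from scipy import stats

    # Phase-level means and standard deviations (compare with Table 2)
    summary = pd.DataFrame({"Mean": phase_scores.mean().round(2),
                            "SD": phase_scores.std().round(3)})
    print(summary)

    # Pairwise Pearson correlations with p-values (compare with Table 1)
    cols = list(phase_scores.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            r, p = stats.pearsonr(phase_scores[a], phase_scores[b])
            print(f"{a} vs {b}: r = {r:.3f}, p = {p:.4g}")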

Results

A total of 244 responses were collected. The reliability of the instrument used to evaluate the usefulness of AI in research activities is high: both Cronbach’s Alpha and McDonald’s Omega reached 0.951, suggesting strong internal consistency. This indicates that the items within the questionnaire are highly correlated and effectively measure the same underlying construct, and the agreement between the two coefficients reinforces the robustness of the reliability assessment.
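As a rough illustration of how such a reliability check can be reproduced, Cronbach’s alpha can be computed directly from the item-level responses with the standard formula; the sketch below assumes the hypothetical responses data frame introduced earlier (McDonald’s omega additionally requires a factor model and is omitted here).

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, k_items) matrix of Likert scores."""
        k = items.shape[1]
        sum_item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - sum_item_var / total_var)

    # All 25 items together (the article reports alpha = 0.951 for the full scale)
    item_matrix = responses[[f"item_{i:02d}" for i in range(1, 26)]].to_numpy()
    print(round(cronbach_alpha(item_matrix), 3))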

Table 1 presents correlations among the research phases, showing that faculty members who value AI in one phase tend to perceive it positively in others, but to varying degrees.

Table 1.

Correlations between the five phases of research

                          State of the Art   Methodology   Reflect on the results   Communication   Subject Knowledge
State of the Art
Methodology                       0.685***
Reflect on the results            0.577***      0.726***
Communication                     0.443***      0.661***                 0.623***
Subject Knowledge                 0.589***      0.691***                 0.643***        0.583***

Note. * p < .05, ** p < .01, *** p < .001

Source: Own elaboration using Jamovi 2.3.28.

The strongest correlation is between the State of the Art and Methodology phases (0.685), suggesting that those who find AI useful for literature review also recognize its utility in methodological tasks. In contrast, the lowest correlation is between State of the Art and Communication (0.443), indicating a comparatively weaker relationship between these phases. The Methodology phase shows strong correlations with Reflection on Results (0.726) and Subject Knowledge consolidation (0.691). These patterns imply that faculty members who appreciate AI for structuring methodologies often find it valuable for analyzing results and consolidating disciplinary knowledge.

Reflection on Results correlates positively with Communication (0.623), suggesting that faculty members who see value in AI for reflecting on results also perceive its utility in communicating research. Communication shows a moderate-to-high correlation with Methodology (0.661), indicating that a positive perception of AI in communication aligns with similar perceptions in methodology. Finally, Subject Knowledge consolidation is moderately to highly correlated with the remaining phases, especially Methodology (0.691) and Reflection on Results (0.643).

Overall, these correlations suggest that faculty members who find AI beneficial in one phase of research are likely to recognize its value in other phases as well. This pattern may indicate a broader, positive or negative attitude toward AI use in academic research, rather than isolated preferences for specific phases.

Table 2 shows faculty members’ perceptions of the areas where AI can be useful in the research process.

Table 2.

Usefulness of AI in research activities (n = 244)

        Overall   State of the Art   Methodology   Reflect on the results   Communication   Subject Knowledge
Mean       3.46               3.63          3.47                     3.18            3.39                3.60
SD        0.619              0.708         0.665                    0.835           0.848               0.671

Source: Own elaboration using Jamovi 2.3.28.

The overall mean ratings suggest that faculty members find AI most beneficial for reviewing the state of the art (mean = 3.63, SD = 0.708), emphasizing its importance for literature reviews. Subject Knowledge closely follows (3.60, SD = 0.671), suggesting that AI is valued for enhancing expertise in specialized fields.

In contrast, reflecting on the results received the lowest mean rating (3.18, SD = 0.835), suggesting that faculty members may view AI as less helpful in the analytical and interpretative phases of research. The relatively higher standard deviation for this phase reflects a wider variation in responses, indicating mixed perceptions among faculty members about AI’s role in reflective tasks.

The Communication (mean = 3.39, SD = 0.848) and Methodology (mean = 3.47, SD = 0.665) phases received moderate ratings, highlighting some perceived usefulness of AI for structuring research methodologies and enhancing research communication. However, the larger SD for Communication indicates a wider range of perspectives, possibly due to discipline-specific differences in AI applicability.

Table 3 shows the perceived usefulness of AI in conducting research activities according to university faculty members’ academic degrees.

Table 3.

Usefulness of AI according to university faculty members’ academic degree

Degree        Mean   State of the Art   Methodology   Reflect on the results   Communication   Subject Knowledge
Doctor        3.08               3.28          3.02                     2.69            3.29                3.29
Master        3.52               3.70          3.53                     3.26            3.46                3.67
Engineer      3.45               3.62          3.47                     3.29            3.28                3.54
Bachelor      3.43               3.61          3.50                     3.09            3.31                3.58
Technician    3.19               3.13          3.16                     2.80            3.67                3.40

Source: Own elaboration using Jamovi 2.3.28.

Faculty members with master’s degrees rated AI the highest overall, particularly for Reviewing the State of the Art (3.70) and Subject Knowledge (3.67), suggesting that they find AI beneficial across multiple research phases. In contrast, faculty members with doctoral degrees reported the lowest mean scores, particularly for reflecting on results (2.69) and overall usefulness (3.08), indicating a more critical view of AI’s role in these areas. Interestingly, technicians rated AI highly for Communication (3.67), highlighting specific strengths in AI’s application despite lower overall perceptions (3.19). Engineers and bachelors showed similar perceptions, with engineers rating AI slightly higher for reflecting on results (3.29).

Among faculty, the data from Table 4 reveals diverse perceptions on AI’s usefulness in academic research.

Table 4.

Usefulness of AI per faculty

Faculty                          Mean   State of the Art   Methodology   Reflect on the results   Communication   Subject Knowledge
CCHH                             3.32               3.49          3.34                     3.00            3.27                3.50
CCEE                             3.62               3.78          3.59                     3.51            3.55                3.68
Engineering                      3.50               3.65          3.52                     3.25            3.45                3.64
Basic Sciences                   3.44               3.38          3.45                     3.20            3.56                3.61
Rehabilitation Sciences          3.18               3.38          3.13                     2.60            3.50                3.50
Virtual UDB                      3.44               3.78          3.48                     3.08            3.24                3.59
Aeronautics                      3.56               3.56          3.63                     3.33            3.67                3.60
Teaches in more than one unit    3.60               3.80          3.62                     3.36            3.49                3.73

Source: Own elaboration using Jamovi 2.3.28.

The Faculty of Economics (CCEE) perceives the highest usefulness of AI across research phases, particularly for reviewing the state of the art (3.78) and supporting subject knowledge (3.68). Similarly, Engineering faculty see benefits from AI, especially in reviewing the state of the art (3.65) and in methodological applications (3.52). On the other hand, Rehabilitation Sciences faculty perceive AI as less applicable, especially for reflection on results (2.60). This may be because the field involves qualitative assessments and patient-centered care, which may not align directly with AI-driven processes. Faculties such as Virtual UDB and Aeronautics reported broader applications of AI: Virtual UDB faculty highlighted its usefulness in reviewing the state of the art (3.78), while Aeronautics faculty valued it for methodological tasks (3.63), likely due to the field’s structured, data-driven nature.

Next, a cluster analysis is presented in Table 5. ChatGPT-4o (OpenAI, 2024) was employed as an assistant in data exploration and processing, specifically for identifying cluster profiles. A preliminary exploration distinguished individuals who view AI as useful across multiple areas from those who consider it useful in only one area. The Elbow Method was then applied to determine the optimal number of clusters (k) for K-Means by evaluating cluster fit (inertia) across a range of k values. Plotting inertia against k typically reveals a point where the reduction in inertia slows, forming an elbow that signals diminishing returns on clustering quality; this elbow point balances clustering accuracy and simplicity, making it the preferred k. In this analysis, the elbow appeared between k = 3 and k = 5, so k = 3 was selected, providing clear, interpretable clusters of faculty members’ AI perceptions without overcomplicating the model.
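A conventional reproduction of this clustering step, assuming the hypothetical phase_scores data frame from the methodology sketches, might look like the following K-Means and Elbow Method sketch; it illustrates the procedure described above rather than the exact pipeline used with ChatGPT-4o.

    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    X = StandardScaler().fit_transform(phase_scores)

    # Elbow Method: inspect how inertia decreases as k grows
    for k in range(2, 8):
        km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
        print(k, round(km.inertia_, 1))

    # Fit the chosen model (k = 3) and inspect cluster means (compare with Table 5)
    km3 = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
    profile_means = phase_scores.groupby(km3.labels_).mean().round(2)
    print(profile_means)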

Table 5.

Means of the cluster profiles

Cluster                 State of the Art   Methodology   Reflect on the results   Communication   Subject Knowledge
Enthusiasts with AI                 4.19          4.11                     3.96            4.08                4.16
Selective Adopters                  3.60          3.34                     2.98            3.23                3.51
Skeptics with AI                    2.74          2.69                     2.27            2.58                2.83

Source: Own elaboration using ChatGPT-4o (OpenAI, 2024).

Each cluster demonstrates distinct patterns in the perceived usefulness of AI across different research phases. Faculty members in the enthusiast or high engagement with AI cluster display a strong positive perception of AI, with average scores close to or above 4 across all research phases. This indicates a high perceived utility of AI in all research phases, reflecting acceptance and likely frequent use of AI tools. Faculty members in this group rate all phases similarly high, suggesting they recognize the value of AI in a comprehensive range of research activities.

The selective adopters cluster has moderate scores, typically ranging from 3.0 to 3.6, indicating a neutral-to-positive view of AI’s usefulness in academic research. Faculty members in this group assign higher scores to literature review and knowledge consolidation, reflecting a tendency to value AI more in initial research stages and information synthesis. However, they rate AI’s utility lower in later stages such as results reflection and methodology, suggesting a more selective approach to AI integration.

Faculty members in the skeptics or low engagement with AI cluster report the lowest average scores across all research competencies, with ratings typically around 2.5 to 2.8. This suggests a generally low perception of AI’s usefulness in supporting research tasks. Although they rate all areas lower, there is a slight upward trend in scores for subject knowledge and reviewing the state of the art, indicating a marginally higher acceptance of AI in these early phases of research.

Discussion

The results of this study suggest that faculty members find AI particularly useful for reviewing the state of the art and deepening their subject knowledge. This may be because AI aids faculty members in advancing theoretical understanding by retrieving relevant information and synthesizing academic articles (Christou, 2023b). Similar findings were reported by Giray et al. (2024) and Xu et al. (2021), who noted that AI facilitates researchers’ work by simplifying complex topics and assisting with literature reviews.

On the other hand, this study found that faculty members see AI as least useful for reflecting on research results. Specifically, this dimension examined the social and ethical implications of AI use from two perspectives: first, how AI can help researchers critically analyze the societal impact of their findings; and second, how AI might assist in forming opinions on social and ethical issues within their fields. The responses indicate that faculty members are cautious about using AI for data interpretation. Previous studies offer a plausible explanation, suggesting that AI-generated insights may be inaccurate or biased and therefore require researchers to critically reflect on and interpret them (Dahal, 2024; Hamilton et al., 2023; Morgan, 2023).

Interestingly, faculty members in this study found AI only moderately useful in the methodological and communication phases of research. This contrasts with other studies that highlight AI’s potential to innovate research methodologies (Butson & Spronken-Smith, 2024; Dahal, 2024). One possible explanation for this discrepancy is that faculty members may more readily recognize the usefulness of AI for tasks such as literature review than for methodological design.

Regarding the perceived usefulness of AI based on academic degrees, this study shows that faculty members with a master’s degree perceive AI as the most useful for research tasks. In contrast, PhD holders seemed to find AI the least useful, particularly in the reflection on results phase. This could be due to the emphasis on critical analysis and interpretation skills developed in doctoral-level research. Technicians also reported low levels of perceived usefulness, possibly because they may not use AI for advanced research tasks or may encounter limitations in applying AI within their specific technical roles. Similar findings were reported by Oliveira et al. (2024), who suggested that academic and professional backgrounds influence the integration of AI into research-related tasks, as these backgrounds shape work environments and familiarity with AI tools.

The study also revealed that faculty members in the faculties of economics and engineering perceived AI as the most useful for academic research. This may be because these fields often work with large datasets and complex mathematical models, where AI can provide significant support (Chubb et al., 2022). In contrast, faculty members in rehabilitation science appeared to find AI less useful. This finding aligns with previous studies suggesting that AI adoption varies across faculties, with technological fields showing more openness to AI integration than humanities and arts (Harris, 2024).

Additionally, this research identified three distinct groups of faculty members based on their overall perception of AI’s usefulness for academic research. These groups include enthusiasts (high engagement), selective adopters (moderate engagement) and skeptics (low engagement). This categorization is consistent with the study by Mah and Groß (2024), which identified similar profiles based on faculty members’ perceptions of AI. However, Mah and Groß also found an additional profile, the indifferent or disengaged group, who show little interest in AI. The discrepancy might be due to differences in the research samples, as Mah and Groß (2024) included a wider variety of higher education institutions. It could also be explained by differences in AI exposure or cultural factors influencing AI adoption, since their study was conducted in a European context.

Conclusions

This study adds to the existing literature on the role of AI in research by examining faculty members’ perceptions of its utility in academic research. Based on the findings, AI is perceived as most beneficial for tasks that involve information synthesis and knowledge acquisition, whereas it is deemed less effective for activities that require critical thinking and interpretation. This emphasizes the importance of human expertise in knowledge creation and addresses concerns about AI potentially taking over intellectual roles in academia. The results also indicate that variables such as academic degree and faculty affiliation shape faculty members’ perceptions of AI. Perceptions of AI and its usefulness in academic research are therefore not binary; rather, they vary according to individual engagement levels and technological expertise. Furthermore, universities could improve productivity by training faculty in advanced AI tools according to their academic qualifications or areas of expertise and by offering foundational courses and guidance to less experienced faculty members.

Acknowledging the variability in faculty members’ engagement with AI may lead institutions to establish targeted training programs. Introductory sessions may acquaint skeptical users with fundamental AI research skills, whereas advanced workshops could enhance the expertise of enthusiasts by focusing on particular AI research applications. Customizing support in this manner can effectively address the AI knowledge gap.

This study provides important insights; however, it has limitations that should be addressed in future research. The sample, while statistically significant, is restricted to faculty members from a single institution (Universidad Don Bosco), potentially impacting the generalizability of the findings. Future research may enhance the sample by incorporating faculty from various universities, thereby facilitating a more comprehensive understanding of AI adoption in diverse institutional contexts.

The quantitative approach offers a general overview of participants’ responses; however, it may not fully capture the complexity of faculty members’ personal experiences and nuanced perspectives on AI in academic research. Future research may employ a mixed-methods approach, integrating in-depth interviews or focus groups to examine faculty members’ personal experiences, their perspectives on the integration of AI tools, and the practical application of these tools in research activities.

This study highlights differences in AI engagement based on academic degree and faculty affiliation. However, it does not explore which specific AI tools faculty members find most useful and why. Future research could investigate the AI tools that faculty members consider most beneficial and the reasons behind their preferences, so that universities and AI developers can better understand which tools are most effective for research tasks. It would also be important for universities to understand how institutional AI policies shape faculty members’ ethical engagement with AI in research, since this could help identify barriers to, or enablers of, AI adoption.

Finally, this study examined perceptions of the application of AI in the interpretation and critical reflection of research data. It remains essential to assess the degree to which researchers are sufficiently equipped to use AI ethically in text generation, statistical analysis, and information synthesis.

References

Böttcher, F., & Thiel, F. (2017). Evaluating research-oriented teaching: a new instrument to assess university students’ research competences. Higher Education, 75(1), 91-110. http://doi.org/10.1007/s10734-017-0128-y

Brew, A. (2002). The nature of research: Inquiry in academic contexts. Routledge. https://doi.org/10.4324/9780203461549

Brown, R. D., Sillence, E., & Branley-Bell, D. (2024). AcademAI: Investigating AI Usage, Attitudes, and Literacy in Higher Education and Research. OSF Preprints. https://osf.io/preprints/osf/64ahx

Burger, B., Kanbach, D. K., Kraus, S., Breier, M., & Corvello, V. (2023). On the use of AI-based tools like ChatGPT to support management research. European Journal of Innovation Management, 26(7), 233-241. https://doi.org/10.1108/EJIM-02-2023-0156

Butson, R., & Spronken-Smith, R. (2024). AI and its implications for research in higher education: a critical dialogue. Higher Education Research & Development, 43(3), 563-577. https://doi.org/10.1080/07294360.2023.2280200

Christou, P. (2023a). How to Use Artificial Intelligence (AI) as a Resource, Methodological and Analysis Tool in Qualitative Research? The Qualitative Report, 28(7), 1968-1980. https://doi.org/10.46743/2160-3715/2023.6406

Christou, P. (2023b). The Use of Artificial Intelligence (AI) in Qualitative Research for Theory Development. The Qualitative Report, 28(9), 2739-2755. https://doi.org/10.46743/2160-3715/2023.6536

Chubb, J., Cowling, P., & Reed, D. (2022). Speeding up to keep up: exploring the use of AI in the research process. AI & Society, 37(4), 1439-1457. https://doi.org/10.1007/s00146-021-01259-0

Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: the state of the field. International Journal of Educational Technology in Higher Education, 22(20), 1-22. https://doi.org/10.1186/s41239-023-00392-8

Dahal, N. (2024). How Can Generative AI (GenAI) Enhance or Hinder Qualitative Studies? A Critical Appraisal from South Asia, Nepal. The Qualitative Report, 29(3), 722-733. https://doi.org/10.46743/2160-3715/2024.6637

Giray, L., Jacob, J., & Gumalin, D. L. (2024). Strengths, weaknesses, opportunities, and threats of using ChatGPT in scientific research. International Journal of Technology in Education (IJTE), 7(1), 40-58. https://doi.org/10.46328/ijte.618

Hamilton, L., Elliott, D., Quick, A., Smith, S., & Choplin, V. (2023). Exploring the use of AI in qualitative analysis: A comparative study of guaranteed income data. International Journal of Qualitative Methods, 22, 1-13. https://doi.org/10.1177/16094069231201504

Harris, P. (2024). Faculty perspectives toward artificial intelligence in higher education (Doctoral dissertation, Middle Georgia State University). Middle Georgia State University Repository. https://comp.mga.edu/static/media/doctoralpapers/2024_Harris_0909141741.pdf

Hernández-Sampieri, R., & Torres, C. (2018). Metodología de la investigación: las rutas cuantitativa, cualitativa y mixta. McGraw Hill México.

Joshi, S., Rambola, R. K., & Churi, P. (2020, 24-25 October). Evaluating artificial intelligence in education for next generation [Conference session]. II International Conference on Smart and Intelligent Learning for Information Optimization (CONSILIO), Goa, India. https://doi.org/10.1088/1742-6596/1714/1/012039

Khlaif, Z. N., Mousa, A., Hattab, M. K., Itmazi, J., Hassan, A. A., Sanmugam, M., & Ayyoub, A. (2023). The potential and concerns of using AI in scientific research: ChatGPT performance evaluation. JMIR Medical Education, 9(e47049), 1-16. https://doi.org/10.2196/47049

Mah, D. K., & Groß, N. (2024). Artificial intelligence in higher education: exploring faculty use, self-efficacy, distinct profiles, and professional development needs. International Journal of Educational Technology in Higher Education, 21(58), 1-17. https://doi.org/10.1186/s41239-024-00490-1

Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D. E., Thierry-Aguilera, R., & Gerardou, F. S. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Education Sciences, 13(9), 1-18. https://doi.org/10.3390/educsci13090856

Morgan, D. L. (2023). Exploring the Use of Artificial Intelligence for Qualitative Data Analysis: The Case of ChatGPT. International Journal of Qualitative Methods, 22, 1-10. https://doi.org/10.1177/16094069231211248

OpenAI. (2024). ChatGPT-4o [Large language model]. OpenAI. https://openai.com

Oliveira, J., Murphy, T., Vaughn, G., Elfahim, S., & Carpenter, R. E. (2024). Exploring the Adoption Phenomenon of Artificial Intelligence by Doctoral Students Within Doctoral Education. New Horizons in Adult Education and Human Resource Development, 36(4), 248-262. https://doi.org/10.1177/19394225241287032

Sheikh, H., Prins, C., & Schrijvers, E. (2023). Artificial intelligence: definition and background. In H. Sheikh, C. Prins, & E. Schrijvers (Eds.), Mission AI: The new system technology (pp. 15-41). Springer International Publishing.

United Nations Educational, Scientific and Cultural Organization [UNESCO]. (2023). ChatGPT and artificial intelligence in higher education: Quick start guide. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000385146

Velarde, G. (2021, 13-15 December). Artificial intelligence trends and future scenarios: Relations between statistics and opinions [Conference session]. 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI), Georgia, United States. https://doi.org/10.1109/CogMI52975.2021.00017

Xu, Y., Liu, X., Cao, X., Huang, C., Liu, E., Qian, S., Liu, X., Wu, Y., Dong, F., Qiu, C., Qiu, J., Hua, K., Su, W., Wu, J., Xu, H., Han, Y., Fu, C., Yin, Z., Liu, M., … Zhang, J. (2021). Artificial intelligence: A powerful paradigm for scientific research. The Innovation, 2(4), 708-727. https://doi.org/10.1016/j.xinn.2021.100179

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–where are the educators? International Journal of Educational Technology in Higher Education, 16(39), 1-27. https://doi.org/10.1186/s41239-019-0171-0