1.
Author(s): Thai Son Chu, Mahfuz Ashraf, Nader Khalifa
The Ethical and Pedagogical Implications of Generative AI Tools in Australian University Classrooms
Abstract
The recent adoption of generative AI tools, including ChatGPT and DALL-E, is reshaping the teaching and learning environment in Australian universities at an unprecedented pace, prompting educators and students alike to reconsider traditional classroom practices. This study investigates the ethical and pedagogical implications of generative AI usage among students, faculty, and institutional leaders across diverse academic disciplines. Employing a mixed-methods approach, the research collected quantitative data through structured surveys (n=330) and qualitative insights via interviews and focus groups. Results reveal high usage rates among postgraduate students and students in technology-oriented faculties, alongside significant ethical concerns voiced predominantly by academic staff. Analysis of the collected data revealed a clear link between frequent AI tool usage, higher digital literacy, and improved academic performance; however, a concerning trend emerged in which increased AI use corresponded with lower levels of ethical awareness among participants. The study also identified faculty-specific variations in perceived risk and AI engagement. These findings suggest an urgent need for comprehensive institutional strategies, including AI literacy training and discipline-sensitive ethical guidelines. By bridging the perception gap between students and educators, universities can responsibly harness the potential of generative AI to enhance educational outcomes while preserving academic integrity.
2.
Author(s): Tondra Rahman, Prottasha Paul
Adoption of Artificial Intelligence in Australian Healthcare: A Systematic Review
Abstract
This systematic review investigates the adoption of Artificial Intelligence (AI) in the Australian healthcare system, examining recent literature to identify prevailing trends, applications, challenges, barriers, and enablers. Drawing on 12 peer-reviewed studies published between 2019 and 2025, the review reveals that while AI technologies show promise in diagnostics, patient monitoring, clinical decision support, and administrative functions, widespread integration remains limited. Professional attitudes toward AI are mixed, influenced by age, training, and prior exposure, with younger, digitally literate practitioners showing greater openness. Key barriers include a lack of AI literacy, ethical concerns, regulatory ambiguity, and the “black-box” nature of AI models, which undermines trust and accountability. Conversely, enabling factors comprise strong policy support, organisational readiness, co-designed tools, and a growing focus on Explainable AI (XAI), which fosters transparency and confidence. The findings underline the necessity of integrating technical innovation with ethical, regulatory, and educational frameworks to ensure safe, effective, and trusted AI deployment in clinical settings. Future strategies must prioritise clinician engagement, digital infrastructure, and ongoing evaluation to transition from pilot projects towards sustainable system-wide implementation.
3.
Author(s): Bushra S. P. Singh, Swati Bhatia
AI for Sanitation Equity: A Review of Artificial Intelligence Applications in Monitoring and Mitigating Open Defecation in Marginalized Indian Communities
Abstract
This paper reviews the application of artificial intelligence (AI) in addressing open defecation within marginalized Indian communities. Despite large-scale sanitation campaigns, significant disparities persist, especially among Scheduled Castes, Scheduled Tribes, and slum populations. AI technologies—such as satellite image analysis, mobile data collection, IoT-enabled smart toilets, and GIS-based spatial modeling—offer new pathways for real-time monitoring, predictive risk mapping, and behavior change. However, gaps remain in ethical deployment, inclusion of marginalized groups, and integration with social learning frameworks. Through a systematic synthesis of academic studies, government data, and pilot projects, this review evaluates the readiness, limitations, and equity impact of AI interventions. It highlights the need for participatory AI design, stronger data governance, and interdisciplinary collaboration to ensure responsible innovation. By centering community agency, this review outlines a roadmap for equitable AI adoption in India’s sanitation ecosystem, aligning with Sustainable Development Goal 6 on clean water and sanitation for all.
4.
Author(s): Manzur Ashraf, Kishan Moradiya
Machine Learning–Driven Carbon Monoxide Prediction: A Case Study Using the UCI Air Quality Dataset
Abstract
Accurate prediction of air pollution is essential for mitigating its adverse effects on human health, particularly with respect to carbon monoxide (CO) exposure. This paper presents a machine learning–based approach for forecasting CO concentration using the UCI Air Quality dataset, which consists of hourly sensor measurements collected from an urban area in Italy. Multiple regression models—including Linear Regression, Decision Trees, Random Forest, and Gradient Boosted Trees (GBT)—were implemented and systematically evaluated. To capture diurnal variation in pollution levels, a temporal feature (Hour) was extracted from timestamp data and incorporated into the models. All preprocessing, feature engineering, and model development were conducted using the KNIME Analytics Platform. Experimental results demonstrate that GBT augmented with the Hour feature achieved the highest predictive accuracy, with an R² score of 0.921, while Random Forest performed poorly on this dataset. A comparative analysis with prior studies based on Delhi air quality data highlights the dataset-dependent nature of model performance. The findings underscore the importance of rigorous data preprocessing and temporal feature engineering in improving air pollution prediction accuracy.
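The pipeline the abstract describes can be sketched briefly. The study itself was built in the KNIME Analytics Platform, so the following is only an illustrative Python analogue, assuming scikit-learn for the gradient-boosted model; the data here is a synthetic stand-in with a built-in diurnal cycle, not the actual UCI Air Quality measurements, and the column names (`sensor`, `CO(GT)`, `Hour`) are hypothetical placeholders. It shows the key step the paper highlights: extracting an Hour feature from timestamps before fitting the regressor.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for hourly sensor data: CO follows a diurnal (24-hour)
# cycle plus a sensor-correlated component and noise.
rng = np.random.default_rng(0)
n = 2000
timestamps = pd.date_range("2004-03-10", periods=n, freq="h")
sensor = rng.normal(1000, 150, n)  # mock sensor channel
co = (1.5
      + 0.8 * np.sin(2 * np.pi * timestamps.hour.to_numpy() / 24)
      + 0.001 * sensor
      + rng.normal(0, 0.1, n))

df = pd.DataFrame({"timestamp": timestamps, "sensor": sensor, "CO(GT)": co})

# Temporal feature engineering: extract the hour of day from the timestamp,
# mirroring the Hour feature the paper adds to capture diurnal variation.
df["Hour"] = df["timestamp"].dt.hour

X = df[["sensor", "Hour"]]
y = df["CO(GT)"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Gradient-boosted trees, evaluated with the same R^2 metric as the paper.
model = GradientBoostingRegressor(random_state=42).fit(X_train, y_train)
r2 = r2_score(y_test, model.predict(X_test))
print(f"R^2 with Hour feature: {r2:.3f}")
```

Because the synthetic target has a strong hour-of-day signal, dropping the `Hour` column from `X` noticeably degrades R² on this toy data, which is the kind of ablation that motivates the paper's comparison; the 0.921 figure reported above applies only to the real dataset and KNIME workflow.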