Publications – Angelos Barmpoutis
https://abarmpou.github.io/angelos
Professor of Digital Arts and Sciences
Mon, 17 Mar 2025 16:43:20 +0000

Automated Imaging Differentiation for Parkinsonism
https://abarmpou.github.io/angelos/page/automated-imaging-differentiation-for-parkinsonism/
Mon, 17 Mar 2025 12:00:38 +0000

Question: Does 3-T magnetic resonance imaging paired with machine learning meet primary end points for differentiating Parkinson disease (PD), the multiple system atrophy (MSA) parkinsonian variant, and progressive supranuclear palsy (PSP)?

Findings: The multicenter Automated Imaging Differentiation of Parkinsonism (AIDP) cohort study of 249 patients and a retrospective cohort of 396 patients showed excellent discrimination of PD vs atypical parkinsonism, MSA vs PSP, PD vs MSA, and PD vs PSP. AIDP machine learning predicted postmortem neuropathology in 93.8% of autopsy cases.

Meaning: Results of this study support the use of Automated Imaging Differentiation of Parkinsonism in the diagnostic workup for common neurodegenerative forms of parkinsonism.

Integrated Telehealth and Extended Reality to Enhance Home Exercise Adherence Following Total Hip and Knee Arthroplasty
https://abarmpou.github.io/angelos/page/integrated-telehealth-and-extended-reality-to-enhance-home-exercise-adherence-following-total-hip-and-knee-arthroplasty/
Wed, 19 Feb 2025 18:57:01 +0000

Nearly one million total hip and knee arthroplasties (THA/TKA) are performed annually in the United States, with most patients discharged home and prescribed home exercise programs (HEPs) to enhance lower extremity function. Traditional paper-based HEPs, while accessible and low-cost, often lack engagement and real-time feedback, which are critical for adherence and performance optimization. Extended reality (XR) and telehealth (TH) systems offer promising solutions, combining engagement and feedback, though each has limitations. To address these gaps, we designed and executed a pilot study comparing exercise performance in individuals with THA/TKA using a conventional paper-based HEP versus a proof-of-concept system, dubbed Tele-PhyT. Tele-PhyT embodies the characteristics we consider essential for a future XR technology that would enable seamless HEP-TH systems: robust markerless full-body tracking, real-time visual feedback, and performance quantification. The pilot study used a randomized cross-over design and targeted two types of users: therapists and patients. Participants favored Tele-PhyT for its real-time feedback and ease of use, and noted its potential to improve HEP adherence and exercise accuracy.

Saving lives with coding: the global impact of an undergraduate project
https://abarmpou.github.io/angelos/page/saving-lives-with-coding-the-global-impact-of-an-undergraduate-project/
Tue, 19 Nov 2024 18:37:00 +0000

Every smartphone app you use, every video game you play and every website you visit is built on a scaffolding of code that dictates how it works. However, coding is not just about writing computer programmes that work properly; it is also about ensuring that the end users of these programmes find them engaging and easy to use.

User interface and user experience (UI/UX) design is often a central focus of software development courses. However, developing these skills in an educational environment can be tricky as students rarely get the chance to create programmes that will actually be used by real people. To tackle this issue, Professor Angelos Barmpoutis, a computer scientist at the University of Florida (UF), has developed a course-based undergraduate research experience (CURE) in which students develop educational apps that are then used by thousands of school students and educators across the US and further afield.

This article is an open-access resource, produced by Futurum Careers, for K-12 students and offers a glimpse into the process of software development.

Assessing the Influence of Passive Haptics on User Perception of Physical Properties in Virtual Reality
https://abarmpou.github.io/angelos/page/assessing-the-influence-of-passive-haptics-on-user-perception-of-physical-properties-in-virtual-reality/
Wed, 14 Feb 2024 18:49:54 +0000

This paper presents a pilot study that explores the role of low-cost passive haptics in how users perceive physical properties, such as the size and weight of objects, within virtual reality environments. An A/B-type study was conducted as an air hockey simulation in which participants experienced two versions: one adhered to conventional VR settings, while the other incorporated a tangible surface, a real table. Statistical analysis of the data collected from post-study questionnaires indicated a shift in the perception of size and weight when participants were exposed to the haptic-enhanced simulation, with virtual objects perceived as larger or heavier. The observed shift in user perception was stronger when the simulation with the tangible surface was experienced first. The paper details the implementation of the air hockey simulation, the setup of the testing environment, and the statistical analysis performed on the collected data, offering practical recommendations for future applications.
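One common way to analyze paired questionnaire data from such a cross-over design is a non-parametric paired test. The sketch below is purely illustrative: the 7-point Likert ratings are hypothetical, not the study's data, and the choice of the Wilcoxon signed-rank test is an assumption, since the paper does not specify its statistical procedure here.

```python
from scipy.stats import wilcoxon

# Hypothetical 7-point Likert ratings of perceived object weight from the
# same ten participants under each condition of the cross-over design.
standard_vr = [3, 4, 3, 2, 4, 3, 3, 4, 2, 3]
tangible_surface = [5, 5, 4, 4, 6, 4, 5, 5, 4, 4]

# Likert data is ordinal, so a paired non-parametric test is a common
# choice over a paired t-test for comparing the two conditions.
stat, p = wilcoxon(standard_vr, tangible_surface)
print(f"W = {stat}, p = {p:.4f}")
```

A significant p-value here would indicate a systematic shift in ratings between the two conditions, which is the kind of perception shift the abstract reports.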

Investigating how interaction with physical objects within virtual environments affects knowledge acquisition and recall
https://abarmpou.github.io/angelos/page/investigating-how-interaction-with-physical-objects-within-virtual-environments-affects-knowledge-acquisition-and-recall/
Tue, 13 Feb 2024 19:04:33 +0000

The use of passive haptics in virtual reality environments has been shown to improve procedural learning across various application domains, such as first responder training and kayaking (Calandra et al. 2023, Barmpoutis et al. 2020). In this paper we go one step further and quantify the effect of passive haptics on knowledge acquisition and recall. We developed a specialized virtual reality application for learning various chemical compounds and their components. Participants engaged in activities that involved precise mixing and proportioning of chemical components to form targeted compounds (see Figure 1). Employing an A/B test framework, participants were randomly assigned to two identical virtual reality environments, differing only in the substitution of the VR controller with a physical jar.

Post-study surveys were administered to gauge user perceptions regarding interaction accuracy and realism, as well as their ability to recall acquired knowledge (specifically, the list of ingredients) from their virtual experience. This pilot study, conducted at the University of Florida Reality Lab, involved 12 subjects. Rigorous statistical analyses, including chi-square tests, were performed on the collected data, with detailed results outlined in this paper.

Two key findings emerged from the study: (a) the presence of the physical jar significantly heightened perceived interaction accuracy, particularly in precise liquid pouring tasks, and (b) users exhibited a remarkable 33% improvement in knowledge recall when utilizing the physical jar as opposed to a conventional VR controller. These results establish a compelling, statistically significant correlation between the integration of passive haptic objects in VR and knowledge acquisition and recall. Furthermore, this study lays the groundwork for a larger-scale study in the future.
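A chi-square test of the kind the study reports can be sketched in a few lines. The counts below are invented for illustration and are not the study's data; the test asks whether ingredient-recall outcomes are independent of the interaction condition.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table of ingredient-recall outcomes pooled
# across participants (columns: recalled, forgotten).
vr_controller_counts = [20, 16]
physical_jar_counts = [30, 6]

# chi2_contingency applies Yates' continuity correction for 2x2 tables
# and returns the statistic, p-value, degrees of freedom, and expected counts.
chi2, p, dof, expected = chi2_contingency([vr_controller_counts, physical_jar_counts])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A p-value below 0.05 would indicate that recall rates differ significantly between the controller and physical-jar conditions, mirroring the statistically significant association the abstract describes.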

Enhancing Museum Experience with Virtual Reality: Situating 3D Museum Collections in Context
https://abarmpou.github.io/angelos/page/enhancing-museum-experience-with-virtual-reality-situating-3d-museum-collections-in-context/
Mon, 12 Feb 2024 16:37:06 +0000

Recent advancements in photogrammetry and 3D LiDAR scanners have led to a noticeable increase in the creation of 3D scans of museum artifacts. This trend has opened up new possibilities for museums to create interactive and immersive experiences using virtual reality (VR). In collaboration with the Florida Museum of Natural History, we have developed an interactive VR game that utilizes the museum's extensive 3D digital collection. The goal is to enhance the traditional museum experience by immersing visitors in dynamic VR environments that showcase 3D museum collections in context. Specifically, our VR game highlights 3D scans of endangered underwater species in the ocean. Children can swim alongside these sea creatures while an AI conversational agent provides scientific insights. Our VR game was showcased to the public at the museum during an outreach event, attracting visits from three K-12 school trips. We conducted field observations to evaluate children's interactions with the VR game and semi-structured interviews with children's guardians as well as museum staff. The findings from our observations emphasized the importance of shared experiences among visitors, which could be facilitated by projecting the VR gameplay on a large screen to mitigate the isolating nature of the HMD VR experience. Museum staff also emphasized the significance of considering visitor traffic when designing VR experiences in museum settings. The findings further highlighted a preference for seated experiences over standing ones, due to safety concerns about children colliding with other visitors and with museum artifacts. This paper provides an overview of our design process and the challenges of implementing HMD VR in museum settings, offering valuable insights for future efforts to design public VR educational experiences for children.

Reinscribing the 3rd dimension in epigraphic studies and transcending disciplinary boundaries
https://abarmpou.github.io/angelos/page/reinscribing-the-3rd-dimension-in-epigraphic-studies-and-transcending-disciplinary-boundaries/
Sun, 31 Dec 2023 20:26:10 +0000

Over the past decade, archaeology and epigraphy have been reconsidering their modus operandi. Prompted and facilitated by technological advances, motivated by new research questions, and challenged by growing calls to engage with contemporary audiences, they have been experimenting with methodological approaches and interdisciplinary collaborations. Within this context, the Digital Epigraphy and Archaeology project (DEA) has been developing 3D digitization techniques that accommodate various types of artifacts, has been incorporating multidisciplinary approaches to achieve a more holistic stance towards the objects of study, and has focused on the reproducibility and accessibility of both its techniques and the 3D models.

This paper presents the DEA’s introspective and reembodied ways of preserving and studying the past by reconsidering historical artifacts and their digital re-materialization. The following sections discuss the project’s approach to copies and digital copies, 3D digitization and enhanced visualization processes, comprehensive cloud services, and 3D printing to present the DEA steps toward facilitating and advancing archaeology and epigraphy. Through such approaches that combine traditional rigor with technological novelty and affordances, the team’s vision is to popularize archaeology and epigraphy within and beyond academia and pinpoint the significance of the world’s heritage to the new generations of students and the public.

Prostate Capsule Segmentation in Micro-Ultrasound Images Using Deep Neural Networks
https://abarmpou.github.io/angelos/page/prostate-capsule-segmentation-in-micro-ultrasound-images-using-deep-neural-networks/
Tue, 18 Apr 2023 23:36:13 +0000

Prostate cancer is the most common internal malignancy among males. Micro-Ultrasound is a promising imaging modality for cancer identification and computer-assisted visualization. Identifying the prostate capsule area is essential in active surveillance monitoring and treatment planning. In this paper, we present a pilot study that assesses prostate capsule segmentation using the U-Net deep neural network framework. To the best of our knowledge, this is the first study on prostate capsule segmentation in Micro-Ultrasound images. For our study, we collected multi-frame volumes of Micro-Ultrasound images, and expert prostate cancer surgeons annotated the capsule border manually. The lack of clear boundaries and the variation of shapes between patients make the task challenging, especially for novice Micro-Ultrasound operators. In total, 2,099 images were collected from 8 subjects; 1,296 of these were manually annotated and were split into a training set (1,008), a validation set (112), and a test set from a different subject (176). The performance of the model was evaluated by calculating the Intersection over Union (IoU) between the manually annotated capsule area and the segmentation mask computed by the trained deep neural network. The results demonstrate high IoU values for the training set (95.05%), the validation set (93.18%), and the test set from a separate subject (85.14%). In 10-fold cross-validation, the IoU was 94.25% and the accuracy was 99%, validating the robustness of the model. Our pilot study demonstrates that deep neural networks can produce reliable segmentation of the prostate capsule in Micro-Ultrasound images and paves the way for the segmentation of other anatomical structures within the capsule, which will be the subject of our future studies.
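The IoU metric used for evaluation has a direct implementation for binary masks. A minimal sketch with toy 4x4 masks standing in for an annotation and a predicted segmentation (not Micro-Ultrasound data):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# The two toy masks share 3 pixels and their union covers 5 pixels,
# so the IoU is 3/5.
truth = np.array([[0, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
pred = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 1],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])
print(iou(pred, truth))  # 0.6
```

In practice the predicted mask would come from thresholding the U-Net's output, and the IoU would be averaged over all annotated frames, as in the per-set figures reported above.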

Developing Mini VR Game Engines as an Engaging Learning Method for Digital Arts & Sciences
https://abarmpou.github.io/angelos/page/developing-mini-vr-game-engines-as-an-engaging-learning-method-for-digital-arts-sciences/
Sat, 11 Mar 2023 23:42:04 +0000

Digital Arts and Sciences curricula have been known for combining topics of emerging technologies and artistic creativity for the professional preparation of future technical artists and other creative media professionals. One of the key challenges in such an interdisciplinary curriculum is the instruction of complex technical concepts to an audience that lacks prior computer science background. This paper discusses how developing small custom virtual and augmented reality game engines can become an effective and engaging method for teaching various fundamental technical topics from Digital Arts and Sciences curricula. Based on empirical evidence, we demonstrate examples that integrate concepts from geometry, linear algebra, and computer programming with 3D modeling, animation, and procedural art. The paper also introduces an open-source framework for implementing such a curriculum on Quest VR headsets, and we provide examples of small-scale focused exercises and learning activities.
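As an illustration of the kind of small, focused linear-algebra exercise a mini engine can motivate, the generic sketch below builds a rotation matrix and applies it to a vertex. It is not code from the paper's open-source Quest framework, just an example of the geometry concepts such a curriculum connects to 3D animation.

```python
import numpy as np

def rotation_y(theta: float) -> np.ndarray:
    """3x3 right-handed rotation matrix about the y axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# Rotating a vertex on the x axis by 90 degrees about y lands it on the
# negative z axis, a sanity check students can verify by hand.
vertex = np.array([1.0, 0.0, 0.0])
rotated = rotation_y(np.pi / 2) @ vertex
print(np.round(rotated, 6))
```

Chaining such matrices (and later replacing them with quaternions) is the kind of stepping stone from linear algebra to skeletal animation that a custom mini engine makes tangible.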

AI-driven Human Motion Classification and Analysis using Laban Movement System
https://abarmpou.github.io/angelos/page/ai-driven-human-motion-classification-and-analysis-using-laban-movement-system/
Thu, 14 Jul 2022 18:30:46 +0000

Human movement classification and analysis are important in research in the health sciences and the arts. Laban movement analysis is an effective method for annotating human movement in dance that describes communication and expression. Technology-supported human movement analysis employs motion sensors, infrared cameras, and other wearable devices to capture critical joints of the human skeleton and facial key points. However, these technologies are not mainstream, and the most popular form of motion capture is conventional video recording, usually from a single stationary camera. Such video recordings can be used to evaluate human movement or dance performance, so any method that can systematically analyze and annotate this raw video footage would be of great importance to the field. This research therefore offers an analysis and comparison of AI-based computer vision methods that can annotate human movement automatically. The study trained and compared four machine learning algorithms (random forest, k-nearest neighbors, neural network, and decision tree) through supervised learning on existing video datasets of dance performances. The developed system automatically produced annotations in the four dimensions of Laban movement analysis (effort, space, shape, body). The results demonstrate that the produced annotations are accurate in comparison to manually entered ground-truth Laban annotations.
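The four classifiers named above can be compared with a few lines of scikit-learn. The paper does not state its implementation, so the snippet below is a generic sketch on synthetic stand-in features rather than the study's pose data extracted from dance videos.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: 200 samples with 10 features and 4 classes,
# playing the role of per-frame motion descriptors labeled with one
# Laban dimension (e.g. effort categories).
X, y = make_classification(n_samples=200, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The four algorithm families compared in the study.
models = {
    "random forest": RandomForestClassifier(random_state=0),
    "k neighbors": KNeighborsClassifier(),
    "neural network": MLPClassifier(max_iter=2000, random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: held-out accuracy {model.score(X_test, y_test):.2f}")
```

In the study's setting, one such model would be trained per Laban dimension, with held-out accuracy against the manually entered annotations serving as the comparison metric.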
