Augmented Intelligence: Integrating Augmented Reality and GPT 4.0 for Real-Time Immersive Assistance

On December 07, 2023

Researcher: Syed Muhammad Raza Rizvi

Description: The “Augmented Intelligence” project aims to seamlessly integrate the power of GPT 4.0 with Augmented Reality (AR). The system uses an AR headset (HoloLens 2) to provide a more intuitive and effective platform for users to interact with a Large Language Model like GPT 4.0, thereby expanding the role of AI in aiding users across a spectrum of tasks.

Using Lasers to Make Our Infrastructure Safer

On December 06, 2023

Researcher: Rishabh Bajaj

Description: Wonder how we can use laser sensors on smartphones to make sure our civil infrastructure stays resilient and safe? Rishabh Bajaj, a PhD student at CViSS, in collaboration with the Ministry of Transportation of Ontario (MTO), is using LiDAR sensors on smartphones to evaluate the surface roughness of concrete, which in turn affects the shear strength of structures.
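The description does not detail how roughness is computed from the scans, but one common point-cloud roughness proxy is the RMS deviation of points from a best-fit plane. The sketch below is an illustrative assumption, not the project's actual pipeline:

```python
import numpy as np

def surface_roughness(points):
    """Estimate the roughness of a scanned concrete patch.

    points: (N, 3) array of LiDAR points (metres).
    Fits a least-squares plane to the patch and returns the RMS
    deviation of the points from that plane -- one simple proxy
    for surface roughness.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The plane normal is the right singular vector associated with
    # the smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    deviations = centered @ normal  # signed distance to the plane
    return float(np.sqrt(np.mean(deviations ** 2)))
```

A perfectly flat patch returns (numerically) zero; rougher surfaces return larger values, which could then be related to shear-strength models.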

Development of Community Data Collection Platform

On March 22, 2023

Researcher: Huaiyuan Weng

Description: We have created a bike scanning system that captures high-resolution images and point cloud data of buildings. The collected data will be utilized to analyze building features and assess potential losses caused by natural hazards.

Human-machine Collaborative and Distributive Inspection

On December 03, 2022

Researcher: Zaid Abbas Al-Sabbag

Description: To modernize how inspections are performed, we have developed a high-tech solution which allows robots and inspectors to collaborate more effectively to perform inspections using mixed reality headsets.

Defect Detection and Quantification for Visual Inspection

On May 13, 2022

Researcher: Rishabh Bajaj, Max Midwinter, Zaid Abbas Al-Sabbag

Description: We propose an unsupervised semantic segmentation method (USP) based on unsupervised learning of image segmentation inspired by differentiable feature clustering, coupled with a novel outlier rejection and stochastic consensus mechanism for mask refinement. In addition, based on the segmentation regions, damage regions are reconstructed in 3D for quantitative evaluation.
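One way to read the stochastic consensus step is as majority voting across repeated stochastic segmentation runs, so that spurious detections from any single run are rejected. The snippet below is a minimal sketch of that voting idea, assuming binary damage masks per run; it is an interpretation, not the published algorithm:

```python
import numpy as np

def consensus_mask(masks, agreement=0.5):
    """Refine a damage mask by stochastic consensus.

    masks: (R, H, W) boolean array, one binary damage mask per
    stochastic segmentation run. A pixel is kept only when at least
    an `agreement` fraction of runs flagged it, which rejects
    outlier detections that appear in only a few runs.
    """
    vote = masks.mean(axis=0)       # per-pixel fraction of runs voting "damage"
    return vote >= agreement
```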

 

Interactive Defect Quantification through Extended Reality

On September 15, 2021

Researcher: Zaid Abbas Al-Sabbag

Description: A new visual inspection method that can interactively detect and quantify structural defects using an Extended Reality (XR) device (headset) is proposed. The XR device, which is at the core of this method, supports an interactive environment using a holographic overlay of graphical information on the spatial environment and physical objects being inspected. By leveraging this capability, a novel XR-supported inspection pipeline, called eXtended Reality-based Inspection and Visualization (XRIV), is developed. Key tasks supported by this method include detecting visual damage from sensory data acquired by the XR device, estimating its size, and visualizing (overlaying) information on the spatial environment.

Project page: GitHub

 

Scale Estimation

On June 18, 2020

Researcher: Ju An Park

Description: Computer vision-based inspection solutions used for the detection of features, such as structural components and defects, often lack methods to determine scale information. Knowing image scale allows the user to quantitatively evaluate regions of interest at a physical scale (e.g., length/area estimations of features). To address this challenge, a learning-based scale estimation technique is proposed. The underlying assumption is that the surface texture of structures, captured in images, contains enough information to estimate the scale of each corresponding image (e.g., pixel/mm). In this work, a convolutional neural network is trained to extract scale-related features from textures captured in images, and a regression model is trained to establish the relationship between those features and scales. The trained model can then be exploited to estimate scales for all images captured from a structure’s surfaces with similar textures.
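The project uses a CNN for feature extraction; as a toy stand-in, the sketch below replaces the CNN with a single handcrafted texture statistic (mean absolute gradient, which varies with sampling scale for a fixed physical texture) and fits a least-squares line from that statistic to scale. All names and the feature choice are illustrative assumptions:

```python
import numpy as np

def texture_feature(patch):
    """Toy texture statistic: mean absolute horizontal gradient.
    For a fixed physical surface texture, the apparent gradient
    changes with the image's mm-per-pixel scale."""
    return float(np.abs(np.diff(patch, axis=1)).mean())

def fit_scale_model(patches, scales_mm_per_px):
    """Fit a least-squares line mapping the texture statistic to
    scale -- a stand-in for the CNN + regression model described
    above. Returns a predictor for new patches."""
    x = np.array([texture_feature(p) for p in patches])
    a, b = np.polyfit(x, np.array(scales_mm_per_px), 1)
    return lambda patch: a * texture_feature(patch) + b
```

Trained on patches of the same surface texture captured at known scales, the returned predictor estimates the scale of a new patch of that texture.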

Project page: GitHub

 

Autonomous Image Localization

On September 28, 2018

Researcher: Chul Min Yeum

Description: A novel automated image localization and classification technique is developed to extract the regions of interest (ROIs) from each image that contain the targeted region for inspection (TRI). ROIs are extracted using structure-from-motion. Less useful ROIs, such as those corrupted by occlusions, are then filtered effectively using a robust image classification technique based on convolutional neural networks. The remaining highly relevant ROIs are then available for visual assessment. The capability of the technique is successfully demonstrated using a full-scale highway sign truss with welded connections.
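Once structure-from-motion has recovered each camera's pose, the ROI for a known 3D target region can be found by projecting its 3D points into the image and taking their bounding box. The sketch below shows that projection step for a pinhole camera; it is a simplified illustration, and the function and parameter names are assumptions:

```python
import numpy as np

def project_roi(points_3d, R, t, K, image_size):
    """Project 3D points of a targeted region for inspection (TRI)
    into one image using its SfM camera pose (R, t) and intrinsic
    matrix K. Returns the ROI bounding box (x0, y0, x1, y1),
    clipped to the image, or None if the region is not visible."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera frame
    if np.any(cam[2] <= 0):                   # behind the camera
        return None
    px = K @ (cam / cam[2])                   # perspective divide + intrinsics
    u, v = px[0], px[1]
    w, h = image_size
    x0, x1 = np.clip([u.min(), u.max()], 0, w)
    y0, y1 = np.clip([v.min(), v.max()], 0, h)
    if x1 <= x0 or y1 <= y0:                  # ROI falls outside the image
        return None
    return (x0, y0, x1, y1)
```

The resulting boxes would then be cropped and passed to the CNN classifier, which discards occluded or otherwise uninformative ROIs.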

Project page: GitHub

 

Vision-Based Automated Crack Detection

On June 13, 2015

Researcher: Chul Min Yeum

Description: A new vision-based visual inspection technique is proposed that automatically processes and analyzes a large volume of images collected from unspecified locations using computer vision algorithms. By evaluating images from many different angles and utilizing knowledge of a fault’s typical appearance and characteristics, the proposed technique can successfully detect faults on a structure.

Project page: GitHub

 

Lamb Wave Mode Decomposition Technique

On December 11, 2010

Researcher: Chul Min Yeum

Description: This research proposes a new method for decomposing Lamb wave modes using concentric ring and circular PZTs. The proposed mode decomposition technique is formulated by solving 3D Lamb wave propagation equations that account for the PZT size and shape, and it requires a specially designed dual PZT composed of concentric ring and circular PZTs. The effectiveness of the proposed technique for Lamb wave mode decomposition is investigated through numerical simulations and experimental tests performed on an aluminum plate.
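At a high level, the ring and circular elements of the dual PZT each measure a different linear mixture of the S0 and A0 modes, so the modes can be recovered by inverting that mixture per time sample. The sketch below illustrates only this inversion idea with assumed, known mixing coefficients; in the actual method those coefficients follow from the 3D Lamb wave solution for the PZT size and shape:

```python
import numpy as np

def decompose_modes(v_ring, v_circ, coeffs):
    """Toy illustration of dual-PZT Lamb wave mode decomposition.

    v_ring, v_circ: length-T signals from the ring and circular PZTs,
    modelled as linear mixtures of the S0 and A0 mode signals.
    coeffs: 2x2 mixing matrix [[ar, br], [ac, bc]], assumed known here.
    Returns the separated (s0, a0) signals.
    """
    A = np.asarray(coeffs, dtype=float)
    mixed = np.vstack([v_ring, v_circ])   # (2, T) measured signals
    s0, a0 = np.linalg.solve(A, mixed)    # invert the mixture per sample
    return s0, a0
```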

Project page: GitHub