7.4.5
- Start-up Simulation Platform for University and Tertiary College SOS-IPO 大專校院創新創業實戰模擬學習平臺
The results of our school's teams are as follows: SkyEye – AI Vehicle Finder: an intelligent vehicle-finding system. When a vehicle enters a gas station, charging station, or similar location, AI recognition technology instantly identifies the license plate and uploads the data to the SkyEye system. The system then automatically cross-checks the plate against the list of vehicles that customers have asked it to track, enabling users to quickly locate a vehicle and saving both time and cost.
Sustainable Impact: The university's research team developed the "SkyEye – AI Vehicle Finder," an intelligent system that identifies license plates in real time when vehicles enter gas stations or charging areas. The recognized data are uploaded to the SkyEye platform and automatically matched against clients' search lists, allowing vehicles to be located quickly and saving time and operational costs. By integrating AI recognition with cloud-based data processing, the system improves traffic-management efficiency and optimizes energy use. This aligns with SDG 7.4.5, which promotes smart technologies for sustainable energy efficiency, and demonstrates the university's contribution to advancing intelligent systems that support energy conservation and technological sustainability.
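The matching step described above can be sketched as follows. This is a minimal, hypothetical illustration of the cross-check logic only; the class name, method names, and normalization rule are assumptions, not the SkyEye platform's actual API.

```python
# Hypothetical sketch of the SkyEye matching step: when the recognizer reads
# a plate at a station, it is checked against the client-submitted watch list
# and the sighting location is recorded. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class SkyEyeMatcher:
    # Plates that clients have asked the platform to locate
    watch_list: set[str] = field(default_factory=set)
    # Latest known location per matched plate
    sightings: dict[str, str] = field(default_factory=dict)

    def on_plate_detected(self, plate: str, station: str) -> bool:
        """Called for each plate read at a station; returns True on a match."""
        plate = plate.upper().replace("-", "")  # normalize OCR output
        if plate in self.watch_list:
            self.sightings[plate] = station
            return True
        return False

matcher = SkyEyeMatcher(watch_list={"ABC1234"})
hit = matcher.on_plate_detected("abc-1234", "Station 07")  # matches after normalization
```

A real deployment would add persistence and client notification, but the core of the feature is this set-membership check performed at recognition time.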

Evidence: https://ssp.moe.gov.tw/cases/852
- Three-Dimensional Image Defect Measurement and Analysis of Wind Turbine Blades using UAV 利用無人機於風電葉片進行三維影像缺陷測量與分析
The number of wind turbines and their power generation are increasing year by year worldwide, making inspection, maintenance, and repair increasingly critical. Among a turbine's structural components, defects occur most often in the blades, which are large and difficult to inspect. This technology therefore uses a UAV to capture visible and infrared thermal images of the blades simultaneously, and applies Instant Neural Graphics Primitives (Instant-NGP) to accelerate Neural Radiance Field (NeRF) training, reconstructing three-dimensional (3D) images from the visible images and generating a 3D mesh model. Experiments showed that a blade can be considered normal when, within the foreground area of the thermal image, the full range (maximum minus minimum) of the relative-temperature grayscale values lies between 109 a.u. and 130 a.u., the mean lies between 161 a.u. and 196 a.u., and the standard deviation lies between 25 a.u. and 35 a.u.; otherwise, the blade is regarded as abnormal. For an abnormal state, a heat-accumulation percentage map is built from the perspective images, and a defect is indicated when a local minimum appears. When a defect is observed in an image, the previously reconstructed 3D model is switched to the corresponding perspective to confirm the defect's actual location on the blade. This technology thus establishes a 3D image-reconstruction process and a thermal-image quality-analysis method, providing an effective approach for long-term monitoring of wind turbine blade quality.
Sustainable Impact: With the rapid global growth of wind turbine installations and power generation, blade inspection and maintenance have become critical. The NTOU research team developed an AI-based UAV inspection system that captures both visible and infrared thermal images of turbine blades. Using Instant Neural Graphics Primitives (Instant-NGP) to accelerate Neural Radiance Field (NeRF) training, the system reconstructs 3D blade models for precise defect localization. Experimental analysis defined specific grayscale value ranges (109–130 a.u. for full range, 161–196 a.u. for mean, and 25–35 a.u. for standard deviation) as indicators of normal thermal states, with deviations suggesting potential defects. When anomalies are detected, the reconstructed 3D model allows engineers to view the defect's exact location from corresponding angles. This integrated workflow provides an efficient and scalable method for long-term monitoring of blade integrity, significantly enhancing the reliability and sustainability of wind power systems, directly supporting SDG 7.4.5: Advancing renewable energy innovation and operational efficiency.
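The normal/abnormal decision rule above reduces to three interval checks on foreground grayscale statistics. The sketch below, assuming the foreground values are available as a plain list, is illustrative only; the function name and data representation are not from the published system.

```python
# Sketch of the blade-state decision rule: a thermal image's foreground
# grayscale statistics must all fall inside the reported normal intervals
# (full range 109-130 a.u., mean 161-196 a.u., std 25-35 a.u.).
from statistics import mean, pstdev

NORMAL_FULL_RANGE = (109, 130)  # max - min of foreground grayscale values
NORMAL_MEAN = (161, 196)
NORMAL_STD = (25, 35)

def blade_state(foreground: list[float]) -> str:
    """Classify a blade's thermal foreground as 'normal' or 'abnormal'."""
    full_range = max(foreground) - min(foreground)
    mu = mean(foreground)
    sigma = pstdev(foreground)  # population standard deviation
    checks = (
        NORMAL_FULL_RANGE[0] <= full_range <= NORMAL_FULL_RANGE[1],
        NORMAL_MEAN[0] <= mu <= NORMAL_MEAN[1],
        NORMAL_STD[0] <= sigma <= NORMAL_STD[1],
    )
    return "normal" if all(checks) else "abnormal"
```

Only images classified as abnormal proceed to the heat-accumulation map and 3D-perspective defect localization steps.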


Evidence: https://mprp.ntou.edu.tw/p/404-1017-104735.php?Lang=zh-tw
- Night Measurement and Skeleton Recognition based on UAV by Deep Learning Algorithm 基於深度學習演算法的無人機夜間測量與人體骨架辨識
At night, objects are not clearly recognized by computer vision in visible-light images, whereas LiDAR scans with its own laser and is unaffected by ambient light. This technology therefore uses a UAV equipped with a LiDAR sensor to capture nighttime environmental information and recognize human characteristics. Using background subtraction and density-based spatial clustering of applications with noise (DBSCAN), the root-mean-square error of a fitted surface summarizes each object's surface state, which serves as one condition for judging whether the object is human. The point-cloud data are then processed into images, and a Faster Region-based Convolutional Neural Network (Faster R-CNN) performs skeleton recognition, establishing an automatic recognition system. As a result, both surface analysis and skeleton recognition achieve human-body recognition rates of up to 87.5% in nighttime environments. A website hosted on the server-side computer presents the currently measured environmental state and indicates whether a human body is present. The technology can ultimately be applied to searching for victims in disaster-relief settings, reducing the difficulty and time of nighttime search and rescue.
Sustainable Impact: This project integrates LiDAR-based UAV sensing and AI-driven object detection (Faster R-CNN) to enhance night-time human recognition for disaster rescue applications. By combining background subtraction, DBSCAN clustering, and surface fitting (RMSE analysis), the system accurately distinguishes human features under low-light conditions, achieving an 87.5% recognition success rate. A web-based server platform provides real-time monitoring of environmental and detection results. The technology effectively supports night-time search and rescue operations, reducing risk and response time. This innovative integration of AI and LiDAR exemplifies the application of sustainable sensing technologies for social resilience, aligning with SDG 7.4.5—advancing smart, energy-efficient systems for environmental monitoring and safety enhancement.
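Two of the geometric cues named above can be sketched in a few lines: DBSCAN groups LiDAR returns into candidate objects, and an RMSE over each cluster summarizes surface state. This is a simplified stand-in, assuming 3D points as tuples; the published pipeline fits curved surfaces, whereas the RMSE here is measured about the cluster's mean height only.

```python
# Minimal DBSCAN over 3D LiDAR points, plus a simple surface-roughness score.
# Parameter values (eps, min_pts) are illustrative, not from the paper.
from math import dist, sqrt

def dbscan(points, eps=0.5, min_pts=4):
    """Label each 3D point with a cluster id >= 0, or -1 for noise."""
    labels = [None] * len(points)
    cid = 0
    def neighbors(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        labels[i] = cid             # i is a core point: start a cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid     # former noise becomes a border point
            if labels[j] is not None:
                continue            # already assigned; do not expand again
            labels[j] = cid
            nb = neighbors(j)
            if len(nb) >= min_pts:  # j is also a core point: keep expanding
                queue.extend(nb)
        cid += 1
    return labels

def surface_rmse(cluster):
    """RMSE of point heights about the cluster's mean height (roughness cue)."""
    zs = [p[2] for p in cluster]
    mu = sum(zs) / len(zs)
    return sqrt(sum((z - mu) ** 2 for z in zs) / len(zs))
```

In the described system, a cluster whose roughness falls in a human-like band is passed on to the Faster R-CNN stage for skeleton confirmation.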


Evidence: https://ieeexplore.ieee.org/document/10214519