Yazar "Lambrecht, Jens" seçeneğine göre listele
Listeleniyor 1 - 3 / 3
Item: Design of a Mobile Data Collection Robot for Learning-based Localization and Autonomous Driving (Institute of Electrical and Electronics Engineers Inc., 2023)
Baykar, Ali Omer; Lambrecht, Jens; Kural, Ayhan; Uygur, Selcuk Eray; Yildiz, Ahmet
This study introduces a mobile robot capable of collecting position and corresponding visual data seamlessly from both indoor and outdoor settings within the same sequence. The mobile robot has been specifically designed to navigate obstacles such as stairs and steps during transitions between indoor and outdoor environments. To accomplish this, the robot incorporates differential driving dynamics and is equipped with essential sensors including two stereo cameras, LIDAR, IMU, and GNSS. The entire system operates on the Robot Operating System (ROS). Consequently, it becomes possible to create a comprehensive dataset that encompasses not only the routes traversed by mobile vehicles but also all vehicle and pedestrian roads, as well as indoor spaces, found within a campus environment. © 2023 IEEE.

Item: Evaluation of Feature Detection and Extraction Methods for Landmark Separability in Visual Localization (Institute of Electrical and Electronics Engineers Inc., 2023)
Baykar, Ali Ömer; Lambrecht, Jens; Kural, Ayhan
In this study, the use of feature detector methods in landmark-based visual localization research has been evaluated. In contrast to comparative studies of existing feature detection methods, their performance has been assessed in detecting and discerning similarities among environments at distinct locations that pose challenges for perception. Additionally, variables such as varying lighting conditions, camera exposure time, sensor light sensitivity (ISO value), and focus range have been incorporated into the evaluation. The BRISK, FAST, Harris, MinEigen, MSER, ORB, SIFT, and SURF feature detector and descriptor methods have been considered in this assessment. BRISK and MinEigen, notably, have demonstrated superior performance compared to other methods in detecting objects that could serve as landmarks in extremely unambiguous environments. Furthermore, it has been observed that with higher camera exposure times, more features can be detected, while an increase in image sensor light sensitivity has led to a reduction in the number of detected features across all methods. Another noteworthy finding is the dramatic decrease in matched features observed in images captured at the same positions under varying lighting conditions. © 2023 IEEE.

Item: Neuronal Networks for Visual Inspection of Assembly Completeness and Correctness in Manufacturing (Springer Science and Business Media Deutschland GmbH, 2024)
Baykar, Ali Ömer; Kural, Ayhan; Lambrecht, Jens
The conformity of production quality with the desired specifications must be controlled quickly, reliably, and accurately. Cost reduction and efficiency studies in production quality control stages are of great importance today. For this reason, intelligent automated systems that do not rely on human operators are a main research subject as a solution for quality control stages. In this study, the final visual inspection of the fastening elements of an industrial product is addressed. The inspection of connection elements such as screws, as one of these quality control stages, is presented through a framework utilizing a camera and a learnable neural network, replacing human-eye inspection. Fasteners appear as small objects in the acquired images.
Therefore, in this study, object detectors based on different CNN backbones (ResNet-50 and ResNet-101) and proposal mechanisms are discussed, and their performance in detecting these small objects is compared with respect to detection speed, accuracy, and reliability. To address the challenges that object detection methods face at an industrial level, a non-processed image dataset has been created. This dataset represents various lighting conditions, including dark-field and bright-field illumination and diffuse reflection, as well as occlusion and restricted camera angles. During the training phase, hyperparameter tuning of deep networks such as YOLOv8, Faster-RCNN with ResNet-50/101, and Sparse-RCNN with different sets of learned object proposals is evaluated to determine which is most suitable for detecting screw connections. Experimental results show that the pretrained Faster-RCNN and Sparse-RCNN achieve a detection success rate of over 85% for small objects in an industrial environment. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
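The first item above describes a ROS-based platform that records position and visual data from two stereo cameras, LIDAR, IMU, and GNSS. As an illustration only, the sketch below shows how synchronized collection of such streams could look with rospy and message_filters; the topic names, message types, and synchronization tolerance are assumptions, not the authors' actual configuration.

# Minimal ROS 1 (rospy) sketch of synchronized sensor logging.
# Topic names and the 50 ms synchronization slop are assumptions for illustration.
import rospy
import message_filters
from sensor_msgs.msg import Image, Imu, NavSatFix

def record(left_img, right_img, imu, fix):
    # A real collector would write these messages to a rosbag or dataset file.
    rospy.loginfo("sample at t=%.3f  lat=%.6f lon=%.6f",
                  left_img.header.stamp.to_sec(), fix.latitude, fix.longitude)

rospy.init_node("data_collection_sketch")
subs = [
    message_filters.Subscriber("/stereo_left/image_raw", Image),   # assumed topic
    message_filters.Subscriber("/stereo_right/image_raw", Image),  # assumed topic
    message_filters.Subscriber("/imu/data", Imu),                  # assumed topic
    message_filters.Subscriber("/gnss/fix", NavSatFix),            # assumed topic
]
sync = message_filters.ApproximateTimeSynchronizer(subs, queue_size=10, slop=0.05)
sync.registerCallback(record)
rospy.spin()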
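For the second item, several of the detectors named in the abstract (BRISK, ORB, SIFT) are available in OpenCV, so a comparison of keypoint counts and cross-illumination matches can be sketched as below. The image file names, the detector subset, and the 0.75 ratio-test threshold are illustrative assumptions rather than the paper's protocol.

# Sketch: compare keypoint counts and cross-lighting matches for a few OpenCV detectors.
# Image file names and the Lowe ratio threshold (0.75) are assumptions for illustration.
import cv2

detectors = {
    "BRISK": (cv2.BRISK_create(), cv2.NORM_HAMMING),
    "ORB":   (cv2.ORB_create(),   cv2.NORM_HAMMING),
    "SIFT":  (cv2.SIFT_create(),  cv2.NORM_L2),
}

img_bright = cv2.imread("scene_bright.png", cv2.IMREAD_GRAYSCALE)  # same place, bright lighting
img_dark   = cv2.imread("scene_dark.png",   cv2.IMREAD_GRAYSCALE)  # same place, dark lighting

for name, (det, norm) in detectors.items():
    kp1, des1 = det.detectAndCompute(img_bright, None)
    kp2, des2 = det.detectAndCompute(img_dark, None)
    pairs = cv2.BFMatcher(norm).knnMatch(des1, des2, k=2)
    # Keep only matches that pass the ratio test between best and second-best candidates.
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    print(f"{name}: {len(kp1)}/{len(kp2)} keypoints, {len(good)} matches across the lighting change")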
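The third item evaluates YOLOv8, Faster-RCNN, and Sparse-RCNN for detecting screw connections. As a rough illustration of the Faster-RCNN part only, the following torchvision sketch swaps in a two-class head (background plus screw) and runs one training step on dummy data; the class count, box coordinates, and learning rate are assumptions, and this is not the authors' pipeline or dataset.

# Sketch: fine-tuning a torchvision Faster R-CNN (ResNet-50 FPN backbone) for a single
# "screw" class. The dummy image, target box, and hyperparameters are placeholders.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # background + screw (assumed label set)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

model.train()
images = [torch.rand(3, 480, 640)]                                 # placeholder image
targets = [{"boxes": torch.tensor([[120.0, 80.0, 160.0, 120.0]]),  # one dummy screw box
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)   # in train mode the model returns a dict of detection losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()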