
Browsing by author "Tokdemir, Onur Behzat"

Now showing 1 - 1 of 1
    Strategic Energy Project Investment Decisions Using RoBERTa: A Framework for Efficient Infrastructure Evaluation
    (2026) Özkan, Recep; Tokdemir, Onur Behzat; Toğan, Vedat; Kadıoğlu, Fethi; Mostofi, Fatemeh
The task of identifying high-value projects from vast investment portfolios presents a major challenge in the construction industry, particularly within the energy sector, where decision-making carries high financial and operational stakes. This complexity is driven by both the volume and heterogeneity of project documentation, as well as the multidimensional criteria used to assess project value. Despite this, a research gap remains: pretrained transformer encoder models, a class of large language models (LLMs), are underutilized in construction project selection, especially in domains where investment precision is paramount. Existing methodologies have largely focused on multi-criteria decision-making (MCDM) frameworks, often neglecting the potential of LLMs to automate and enhance early-phase project evaluation. However, deploying LLMs for such tasks introduces high computational demands, particularly in privacy-sensitive, enterprise-level environments. This study investigates the application of the robustly optimized BERT model (RoBERTa) for identifying high-value energy infrastructure projects. Our dual objective is to (1) leverage RoBERTa's pre-trained language architecture to extract key information from unstructured investment texts and (2) evaluate its effectiveness in enhancing project selection accuracy. We benchmark RoBERTa against several leading LLMs: BERT, DistilBERT (a distilled variant), ALBERT (a lightweight version), and XLNet (a generalized autoregressive model). All models achieved over 98% accuracy, validating their utility in this domain. RoBERTa outperformed its counterparts with an accuracy of 99.6%. DistilBERT was fastest (1025.17 s), while RoBERTa took 2060.29 s. XLNet was slowest at 4145.49 s. In conclusion, RoBERTa can be the preferred option when maximum accuracy is required, while DistilBERT can be a viable alternative under computational or resource constraints.
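The pipeline the abstract outlines (encode unstructured project text with a RoBERTa encoder, then classify it as high- or low-value) can be sketched with the Hugging Face `transformers` API. This is a hedged illustration only: the binary label scheme, the tiny configuration, and the random weights are assumptions made so the fragment is self-contained and runs without downloading pretrained checkpoints; the study itself fine-tunes the full pretrained RoBERTa model on real investment texts.

```python
# Illustrative sketch, NOT the authors' implementation: a RoBERTa
# sequence-classification head framed as binary project screening.
# Assumptions: labels 0 = low-value, 1 = high-value; a tiny random
# config stands in for the pretrained roberta-base used in the paper.
import torch
from transformers import RobertaConfig, RobertaForSequenceClassification

# Tiny randomly initialised configuration so no weights are downloaded.
config = RobertaConfig(
    vocab_size=1000,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
    max_position_embeddings=130,  # RoBERTa offsets position ids by 2
    num_labels=2,                 # binary: high- vs low-value project
)
model = RobertaForSequenceClassification(config)
model.eval()

# Stand-in for one tokenised investment document (batch of 1, 16 tokens).
input_ids = torch.randint(0, config.vocab_size, (1, 16))
with torch.no_grad():
    logits = model(input_ids=input_ids).logits  # shape: (1, 2)
probs = torch.softmax(logits, dim=-1)           # per-class probabilities
```

In a real fine-tuning run, the tiny config would be replaced by pretrained weights and the logits fed to a cross-entropy loss over labelled project documents; the benchmarked alternatives (DistilBERT, ALBERT, XLNet) drop into the same sequence-classification interface.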


This site is protected under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Türk-Alman Üniversitesi, Beykoz, İstanbul, TÜRKİYE

DSpace 7.6.1, Powered by İdeal DSpace

DSpace software copyright © 2002-2026 LYRASIS
