Journal of Robotics and Control (JRC) https://journal.umy.ac.id/index.php/jrc <p align="justify"><strong>Journal of Robotics and Control (JRC) p-ISSN: <a href="https://portal.issn.org/resource/ISSN/2715-5056" target="_blank" rel="noopener">2715-5056</a>, e-ISSN: <a href="https://portal.issn.org/resource/ISSN/2715-5072" target="_blank" rel="noopener">2715-5072</a> </strong>is an international peer-reviewed open-access journal published bi-monthly, six times a year, by Universitas Muhammadiyah Yogyakarta in collaboration with <strong><a href="https://ptti.web.id/publication/" target="_blank" rel="noopener">Peneliti Teknologi Teknik Indonesia</a></strong>. The Journal of Robotics and Control (JRC) invites scientists and engineers worldwide to exchange and disseminate theoretical and practice-oriented developments and advances across the whole spectrum of <strong>robotics</strong> and <strong>control</strong>. <strong>Journal of Robotics and Control (JRC) </strong>has been indexed by <strong><a href="https://www.scopus.com/sourceid/21101058819" target="_blank" rel="noopener">SCOPUS</a></strong> and is available in <strong><a href="https://www.scimagojr.com/journalsearch.php?q=21101058819&amp;tip=sid&amp;clean=0" target="_blank" rel="noopener">SCIMAGO</a></strong>.</p> <table class="data" width="100%" bgcolor="#f0f0f0"> <tbody> <tr valign="top"> <td width="20%">Journal title</td> <td width="80%"><strong> Journal of Robotics and Control (JRC)</strong></td> </tr> <tr valign="top"> <td width="20%">Abbreviation</td> <td width="80%"> <strong>JRC</strong></td> </tr> <tr valign="top"> <td width="20%">Frequency</td> <td width="80%"><strong> 6 issues per year</strong></td> </tr> <tr valign="top"> <td width="20%">Type of Review</td> <td width="80%"><strong> Double-Blind Review</strong><strong><br /></strong></td> </tr> <tr valign="top"> <td width="20%">Print ISSN</td> <td width="80%"> <a href="https://portal.issn.org/resource/ISSN/2715-5056" 
target="_blank" rel="noopener"><strong>2715-5056</strong></a></td> </tr> <tr valign="top"> <td width="20%">Online ISSN</td> <td width="80%"> <a href="https://portal.issn.org/resource/ISSN/2715-5072" target="_blank" rel="noopener"><strong>2715-5072</strong></a></td> </tr> <tr valign="top"> <td width="20%">Editor</td> <td width="80%"> <strong>See</strong> <a href="https://journal.umy.ac.id/index.php/jrc/about/editorialTeam" target="_self"><strong>Editor</strong></a></td> </tr> <tr valign="top"> <td width="20%">Publisher</td> <td width="80%"> <a href="http://www.umy.ac.id/" target="_blank" rel="noopener"><strong>Universitas Muhammadiyah Yogyakarta</strong></a>, in collaboration with <a href="https://ptti.web.id/publication/" target="_blank" rel="noopener"><strong>Peneliti Teknologi Teknik Indonesia (PTTI)</strong></a></td> </tr> <tr valign="top"> <td width="20%">Organizer</td> <td width="80%"> <a href="https://ptti.web.id/journal/" target="_blank" rel="noopener"><strong>Peneliti Teknologi Teknik Indonesia (PTTI)</strong></a></td> </tr> <tr valign="top"> <td width="20%">Citation Analysis</td> <td width="80%"> <strong><a href="https://scholar.google.co.id/citations?view_op=list_works&amp;hl=en&amp;user=3-o13vEAAAAJ" target="_blank" rel="noopener">Google Scholar</a> | <a href="https://www.scopus.com/sourceid/21101058819" target="_blank" rel="noopener">Scopus</a> | <a href="https://app.dimensions.ai/discover/publication?search_mode=content&amp;and_facet_source_title=jour.1385953" target="_blank" rel="noopener">Dimensions</a> | <a href="https://www.scimagojr.com/journalsearch.php?q=21101058819&amp;tip=sid&amp;clean=0" target="_blank" rel="noopener">Scimago</a> <strong>|</strong> <a href="https://journal.umy.ac.id/index.php/jrc/pages/view/wos_citation" target="_blank" rel="noopener">Web of Science</a></strong></td> </tr> <tr valign="top"> <td width="20%">Abstracting &amp; Indexing</td> <td 
width="80%"> <a href="https://www.ebsco.com/m/ee/Marketing/titleLists/aci-coverage.htm" target="_blank" rel="noopener"><strong>EBSCO</strong></a></td> </tr> <tr valign="top"> <td width="20%">Digital Marketing</td> <td width="80%"> <strong><a href="https://mail.cloudmatika.com/" target="_blank" rel="noopener">Direct Email</a> | <a href="https://www.youtube.com/c/AlfianCenter" target="_blank" rel="noopener">YouTube Channel</a> | <a href="https://www.instagram.com/portalpublikasi/" target="_blank" rel="noopener">Instagram</a> | Twitter</strong></td> </tr> </tbody> </table> <p> </p> <table class="data" width="100%" bgcolor="#f0f0f0"> <thead> <tr> <th style="text-align: center;" width="33%">Time to First Decision</th> <th style="text-align: center;" width="33%">Review Time</th> <th style="text-align: center;" width="33%">Publication Time</th> </tr> <tr> <th style="text-align: center;" width="33%">2-4 Weeks</th> <th style="text-align: center;" width="33%">4-8 Weeks</th> <th style="text-align: center;" width="33%">4-8 Weeks</th> </tr> </thead> </table> <p align="justify">Scopus Quartile = Q1 || CiteScore = 6.5 || SJR = 0.435 || SNIP = 1.130</p> <p align="justify"><a href="https://www.scopus.com/sourceid/21101058819" target="_blank" rel="noopener noreferrer">https://www.scopus.com/sourceid/21101058819</a></p> <p align="justify">SCImago Quartile = Q2 || SJR = 0.44</p> <p align="justify"><a href="https://www.scimagojr.com/journalsearch.php?q=21101058819&amp;tip=sid&amp;clean=0" target="_blank" rel="noopener noreferrer">https://www.scimagojr.com/journalsearch.php?q=21101058819&amp;tip=sid&amp;clean=0</a></p> <table> <thead> <tr> <td> <p align="justify"> </p> <div style="height: 100px; width: 180px; font-family: Arial, Verdana, helvetica, sans-serif; background-color: #ffffff; display: inline-block;"> <div style="padding: 0px 16px;"> <div style="font-size: 12px; text-align: right;"> <div style="height: 100px; width: 180px; font-family: Arial, Verdana, helvetica, sans-serif; background-color: #ffffff; display: inline-block;"> <div style="padding: 0px 16px;"> <div style="padding-top: 3px; line-height: 1;"> <div style="float: left; font-size: 28px;"><span id="citescoreVal" style="letter-spacing: -2px; display: inline-block; padding-top: 7px; line-height: .75;">6.5</span></div> <div style="float: right; font-size: 14px; padding-top: 3px; text-align: right;"><span id="citescoreYearVal" style="display: block;">2024</span>CiteScore</div> </div> <div style="clear: both;"> </div> <div style="padding-top: 3px;"> <div style="height: 4px; background-color: #dcdcdc;"> <div id="percentActBar" style="height: 4px; background-color: #0056d6; width: 77%;"> </div> </div> <div style="font-size: 11px;"><span id="citescorePerVal">77th percentile</span></div> </div> <div style="font-size: 12px; text-align: right;">Powered by <img style="width: 50px; height: 15px;" src="https://www.scopus.com/static/images/scopusLogoOrange.svg" alt="Scopus" /></div> </div> </div> </div> </div> </div> </td> <td> </td> <td> <p> <a title="SCImago Journal &amp; Country Rank" href="https://www.scimagojr.com/journalsearch.php?q=21101058819&amp;tip=sid&amp;exact=no"><img src="https://www.scimagojr.com/journal_img.php?id=21101058819" alt="SCImago Journal &amp; Country Rank" border="0" /></a></p> </td> </tr> </thead> </table> <p align="justify"><strong>Submit the paper through Online Submission Only </strong><a href="https://journal.umy.ac.id/index.php/jrc/login">LOG IN</a> or <a 
href="https://journal.umy.ac.id/index.php/jrc/user/register?source=">REGISTRATION</a>. Please tick the Author checkbox when registering; if you missed it, you can update your roles in the My Profile menu or contact the editorial office.</p> <p><strong>Please download the Journal Article Template here: </strong><a href="https://drive.google.com/file/d/19w7M7cFE9LIsopb5PyWGmErbuu2Qi6pG/view" target="_blank" rel="noopener">DOCX</a> or <a href="https://drive.google.com/file/d/1HcVaxJlHUW2Ol08jBD37hHpc7n1YVihz/view?usp=sharing" target="_blank" rel="noopener">LATEX</a>.</p> <p align="justify">Registration and login are required to submit manuscripts online and to track the status of current submissions. Submitted manuscripts must not have been published previously. Manuscripts must be written in clear, grammatically correct English. For further information, please contact jrcofumy@gmail.com.</p> en-US <p>Authors who publish with this journal agree to the following terms: </p><ol type="a"><li>Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike License</a> that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.</li><li>Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.</li><li>Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See <a href="http://opcit.eprints.org/oacitation-biblio.html" target="_new">The 
Effect of Open Access</a>).</li></ol><p> </p><p dir="ltr">This journal applies the <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a> to the work published at <a href="/index.php/jrc">https://journal.umy.ac.id/index.php/jrc</a>. You are free to:</p><ol><li><strong>Share</strong> – copy and redistribute the material in any medium or format.</li><li><strong>Adapt</strong> – remix, transform, and build upon the material for any purpose, even commercially.</li></ol><p dir="ltr">The licensor cannot revoke these freedoms as long as you follow the license terms, which include the following:</p><ol><li><strong>Attribution</strong>. <span>You must give appropriate credit</span><span>, provide a link to the license, and indicate if changes were made.</span><span> You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.</span></li><li><strong>ShareAlike. </strong>If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.</li><li><strong>No additional restrictions</strong>. 
<span>You may not apply legal terms or technological measures</span><span> that legally restrict others from doing anything the license permits.</span></li></ol><p dir="ltr"> </p><p>• Creative Commons Attribution-ShareAlike (CC BY-SA)</p><p><a href="http://creativecommons.org/licenses/by-sa/4.0/" rel="license"><img style="border-width: 0;" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" alt="Creative Commons License" /></a><br />JRC is licensed under a <a href="http://creativecommons.org/licenses/by-sa/4.0/" rel="license">Creative Commons Attribution-ShareAlike 4.0 International License</a></p> jrcofumy@gmail.com (Journal of Robotics and Control (JRC) Editor) jrcofumy@gmail.com (Journal of Robotics and Control (JRC) Editor) Wed, 13 Aug 2025 20:41:21 +0700 OJS 3.2.1.5 http://blogs.law.harvard.edu/tech/rss 60 An Explainable CNN–LSTM Framework for Monthly Crude Oil Price Forecasting Using WTI Time Series Data https://journal.umy.ac.id/index.php/jrc/article/view/26609 <p>Crude oil price forecasting poses significant challenges due to its volatility and nonlinear dynamics. This study proposes an explainable CNN–LSTM framework to predict monthly West Texas Intermediate (WTI) crude oil prices. The model captures both local and sequential patterns without using external inputs or decomposition. Trained over 50 epochs across three data splits, it was evaluated using RMSE, MAE, MASE, SMAPE, and directional accuracy, achieving a classification accuracy of 92.4% and a directional accuracy of up to 87.4%. The model consistently outperformed classical and hybrid baselines, with statistical significance confirmed by the Friedman–Nemenyi test. 
Saliency-based interpretability further enhances transparency, making the framework suitable for real-world energy forecasting.</p> Joompol Thongjamroon, Songgrod Phimphisan, Nattavut Sriwiboon Copyright (c) 2025 Joompol Thongjamroon, Songgrod Phimphisan, Nattavut Sriwiboon https://creativecommons.org/licenses/by-sa/4.0 https://journal.umy.ac.id/index.php/jrc/article/view/26609 Wed, 13 Aug 2025 00:00:00 +0700 The Emerging Role of Artificial Intelligence in Identifying Epileptogenic Zone: A Systematic Literature Review https://journal.umy.ac.id/index.php/jrc/article/view/27281 <p>Identifying epileptogenic zones (EZs) is a crucial step in the pre-surgical evaluation of patients with drug-resistant epilepsy. Conventional methods, including EEG/SEEG visual inspection and neurofunctional imaging, often face challenges in accuracy, reproducibility, and subjectivity. The rapid development of artificial intelligence (AI) technologies in signal processing and neuroscience has enabled their growing use in detecting epileptogenic zones. This systematic review explores recent developments in AI applications for localizing epileptogenic zones, focusing on algorithm types, dataset characteristics, and performance outcomes. A comprehensive literature search was conducted in 2025 across databases such as ScienceDirect, Springer Nature, and IEEE Xplore using relevant keyword combinations. The study selection followed PRISMA guidelines, resulting in 34 scientific articles published between 2020 and 2024. Extracted data included AI methods, algorithm types, dataset modalities, and performance metrics (accuracy, AUC, sensitivity, and F1-score). Results showed that deep learning was the most widely used approach (44%), followed by machine learning (35%), multi-method approaches (18%), and knowledge-based systems (3%). CNN and ANN were the most commonly applied algorithms, particularly in scalp EEG and SEEG-based studies. 
Datasets ranged from public sources (Bonn, CHB-MIT) to high-resolution clinical SEEG recordings. Multimodal and hybrid models demonstrated superior performance, with several studies achieving accuracy rates above 98%. This review confirms that AI (especially deep learning with SEEG and multimodal integration) has strong potential to improve the precision, efficiency, and scalability of EZ detection. To facilitate clinical adoption, future research should focus on standardizing data pipelines, validating AI models in real-world settings, and developing explainable, ethically responsible AI systems.</p> Yuri Pamungkas, Riva Satya Radiansyah, Stralen Pratasik, Made Krisnanda, Natan Derek Copyright (c) 2025 Yuri Pamungkas, Riva Satya Radiansyah, Stralen Pratasik, Made Krisnanda, Natan Derek https://creativecommons.org/licenses/by-sa/4.0 https://journal.umy.ac.id/index.php/jrc/article/view/27281 Fri, 15 Aug 2025 00:00:00 +0700 Dynamic Clustering of Multi-Mobile Robot System using Gaussian Mixture Model https://journal.umy.ac.id/index.php/jrc/article/view/27184 <p>Managing large fleets of mobile robots poses significant challenges to system coordination and workload. An effective grouping strategy is crucial for enhancing operational performance and scalability. This paper introduces a two-stage dynamic clustering method (DCM), a novel framework for organizing robots into manageable groups. The methodology utilizes a Gaussian Mixture Model and the Expectation-Maximization algorithm to cluster robots based on their path intersection points. A unique "cost" parameter, formulated as a least squares objective function, is proposed to guide the selection of near-optimal, workload-balanced configurations. The results from extensive simulations demonstrated the framework's effectiveness. On a single dataset, DCM exhibited exceptional reliability, maintaining a stable objective function value even as the number of robots per cluster fluctuated across runs. 
A sensitivity analysis over multiple unique datasets confirmed the model's adaptive strength, showing its ability to reconfigure clusters. This adaptability was highlighted by the mean objective function value varying across different scenarios. Further analysis involving reduced robot populations and obstacle-filled environments validated DCM's generalizability and environment-independent nature. The robot distribution mechanism was consistently equitable and balanced. Statistical validation, including bootstrap resampling, confirmed the stability and reliability of the performance estimates. The method also consistently maintained a high level of performance by adapting to internal variations. Moreover, every robot was successfully assigned to a cluster across all trials. The research concludes that DCM is a robust, adaptive, and environment-independent framework. It successfully balances performance stability with the flexibility to respond to new operational conditions, proving it is an effective solution for multi-robot coordination.</p> Hung Truong Xuan, Thang Pham Manh, Nha Nguyen Quang, Hanh Nguyen Thi Hong Copyright (c) 2025 Hung Truong Xuan, Thang Pham Manh, Nha Nguyen Quang, Hanh Nguyen Thi Hong https://creativecommons.org/licenses/by-sa/4.0 https://journal.umy.ac.id/index.php/jrc/article/view/27184 Fri, 15 Aug 2025 00:00:00 +0700 Computer Vision for Food Nutrition Assessment: A Bibliometric Analysis and Technical Review https://journal.umy.ac.id/index.php/jrc/article/view/27525 <p>This study examines the latest trends, challenges, and advances in food image segmentation and computer vision-based nutritional analysis. Traditional nutritional assessment methods such as food diaries and questionnaires are limited by their reliance on participant recall and manual processing, which reduces their accuracy and efficiency. 
As an alternative, advances in machine learning and deep learning have shown potential for automating food identification and estimating nutrient content such as calories, protein, carbohydrates, and fat. This study was conducted through a bibliometric analysis and technical review of publications from the Scopus database, using a structured search strategy and applying inclusion and exclusion criteria. Articles were selected based on topic relevance, use of machine learning or deep learning methods, publication in English, and publication between 2020 and 2024. The review identified key research trends, leading contributors, popular methods such as CNN and YOLO, and the most frequently reported limitations, including a lack of dataset diversity, inaccuracy in food volume estimation, and the need for real-time integrated systems. These limitations were analyzed based on the methodology and findings of the reviewed studies. This review is intended as a comprehensive reference for researchers and practitioners developing food image segmentation technology for more accurate and applicable nutritional assessment.</p> Nani Purwati, R. Rizal Isnanto, Martha Irene Kartasurya Copyright (c) 2025 Nani Purwati, R. Rizal Isnanto, Martha Irene Kartasurya https://creativecommons.org/licenses/by-sa/4.0 https://journal.umy.ac.id/index.php/jrc/article/view/27525 Thu, 21 Aug 2025 00:00:00 +0700 Predicting Occupational Heat Stress in Critical Sectors: A Sector-Based Systematic Review of Wearable Sensing, IoT Platforms, and Machine Learning Models https://journal.umy.ac.id/index.php/jrc/article/view/27377 <p>Occupational heat stress is a growing threat to the health and productivity of workers exposed to extreme environmental conditions. This issue is particularly acute in sectors such as construction, mining, agriculture, and heavy industry, where high heat exposure and physical workload are constant. 
This systematic review analyzes 96 scientific articles published in recent years, aiming to identify emerging technological systems focused on the prediction, monitoring, and mitigation of occupational heat stress. The main contribution of this study lies in the cross-sectoral categorization of recent solutions, providing a comparative framework that highlights knowledge gaps, methodological limitations, and opportunities for innovation. Following PRISMA guidelines, data were extracted on sensor type, predictive models, validation environments, and the sector of application. Technologies were classified into five main categories: wearable sensors, IoT-based monitoring platforms, hybrid thermal indices, predictive models based on environmental and physiological inputs, and decision-support tools. The results reveal a strong presence of wearable systems. However, only a small fraction of studies conducted in-field validation under real thermal stress conditions, and even fewer included longitudinal ergonomic trials, limiting generalizability; additional concerns include heterogeneous outcome measures and inconsistent definitions of heat stress across studies. Adoption is further constrained by socio-technical barriers such as worker compliance, PPE burden, costs, data privacy, and interoperability gaps. A sectoral imbalance is also observed, with construction and industrial environments receiving more research attention than mining, agriculture, and indoor workplaces. In conclusion, we propose a practical roadmap for the adoption of standardized data schemas and protocols, field trials across complete work cycles, privacy-preserving analytics (federated learning), and integration of ergonomic and organizational controls. 
In highly humid or high-radiation settings, complementing or replacing WBGT with hybrid indices such as the UTCI can improve risk estimation and enable more actionable work-rest and hydration alerts.</p> Roger Fernando Asto Bonifacio, Blanca Yeraldine Buendia Milla, Jezzy James Huaman Rojas Copyright (c) 2025 Roger Fernando Asto Bonifacio, Blanca Yeraldine Buendia Milla, Jezzy James Huaman Rojas https://creativecommons.org/licenses/by-sa/4.0 https://journal.umy.ac.id/index.php/jrc/article/view/27377 Sat, 23 Aug 2025 00:00:00 +0700 Sensor Fusion and Predictive Control for Adaptive Vehicle Headlamp Alignment: A Comparative Analysis https://journal.umy.ac.id/index.php/jrc/article/view/26740 <p>Nighttime driving safety is often compromised by the inability of conventional adaptive headlamp systems to account for lateral slip and rapidly changing road conditions, leading to misalignment and reduced visibility during aggressive maneuvers. Most existing approaches rely solely on steering angle, which limits adaptability under dynamic slip scenarios. This study presents the development and comparative evaluation of a Fused Controller that uniquely integrates sensor fusion, adaptive gain scheduling, and multi-step predictive optimization for robust adaptive headlamp alignment. Five control architectures, namely the Filtered Proportional Controller (FPC), Raw State MPC (RS-MPC), Extended MPC (E-MPC), Feedforward-Enhanced MPC (FF-MPC), and the proposed Fused Controller, were systematically evaluated on a 2 km synthetic road with ten challenging segments. Compared to the E-MPC baseline, the Fused Controller achieved a 42.5% reduction in root mean square error (RMSE) in long S-curves and a 30.6% improvement in sharp turns, with a settling time of 0.6 s (versus 1.8 s for the FPC) and a jitter index of 9.93°/s. Frequency-domain analysis confirmed a 1.2 Hz bandwidth with actuator-compatible roll-off, and stability analysis validated robustness under noise and disturbances. 
Statistical analysis across 20 independent simulation runs per controller showed these improvements are highly significant (p &lt; 0.001, large Cohen’s d), confirming the practical superiority of the Fused Controller. These results indicate enhanced driver visibility and reduced nighttime collision risk, while the controller’s computational efficiency and adaptive gains support scalability and real-world deployment. This work provides a rigorous and practical framework for next-generation adaptive lighting systems.</p> Glenson Toney, Gaurav Sethi, Cherry Bhargava, Aldrin Claytus Vaz, Navya Thirumaleshwar Hegde Copyright (c) 2025 Glenson Toney, Gaurav Sethi, Cherry Bhargava, Aldrin Claytus Vaz, Navya Thirumaleshwar Hegde https://creativecommons.org/licenses/by-sa/4.0 https://journal.umy.ac.id/index.php/jrc/article/view/26740 Fri, 29 Aug 2025 00:00:00 +0700