In this capstone project, a scoping literature review of cybersecurity measures in renewable energy systems is presented. The review draws on 63 peer-reviewed articles published between 2022 and 2025, retrieved from IEEE Xplore and the ACM Digital Library, to evaluate the strengths, weaknesses, and overall effectiveness of current security protocols. The analysis shows that most research focuses on a few high-impact threats, especially False Data Injection and Denial-of-Service attacks, which mainly exploit vulnerabilities in communication networks and Distributed Energy Resource (DER) components. Standards such as IEC 61850 and the NIST Cybersecurity Framework appear recurrently in the literature. Three main research directions are identified: improving system resilience through redundancy and fault tolerance, enhancing threat detection with machine learning methods, and designing adaptive protection schemes for systems with high DER penetration. Although there is a clear effort to safeguard renewable energy infrastructure, important gaps remain, particularly in practical implementation challenges, the security of emerging technologies, and the human factors involved in operation. Future research should address SCADA/ICS environments and specific critical infrastructure contexts and include rigorous testing under realistic conditions. These steps are vital for increasing the cybersecurity resilience of renewable energy systems against evolving threats.
There is a growing threat to the United States (U.S.) in the cyber domain. This paper examines what constitutes a cyber-attack and how it fits within Article 2(4) of the UN Charter. The proper justification for cyber warfare and the moral obligation of the U.S. to improve its cyber capabilities are also discussed. Lastly, research into the cyber threat landscape, as well as the cost of cyber incidents, is conducted. The goal is to enable a proper U.S. response to cyber threats that aligns with international treaties and law.
Artificial Intelligence and Machine Learning represent the future of computing, but they also represent a long-standing goal in computing theory. Through the analysis of over 50 scholarly articles and conference proceedings, this study provides a brief overview of these technologies by examining their origins, the differences between them, and the various advantages and disadvantages their union brings to the cyber-sphere. Although often discussed as one large overarching technology, Artificial Intelligence and Machine Learning are better understood as a family of related technologies that build on one another to create self-learning systems. In cybersecurity, these technologies are often leveraged for the defensive AI systems active today; however, various proofs of concept show just how dangerous they can be when misused. Building on this research, the study outlines potential future implications for the cyber-sphere as these technologies continue to grow.
This paper investigates the feasibility of using consumer-grade AI assistants such as ChatGPT, Google’s Gemini, X’s Grok, and Microsoft’s Copilot to determine whether emails are likely to be phishing attempts. In the experiment, each AI engine is given emails to examine, and the results of the various chat engines are compared.
Deepfake technology, initially developed for entertainment, has increasingly become a significant threat in digital forensics, misinformation, and cybercrime. This paper evaluates the effectiveness of the Autopsy deepfake detection plug-in, a forensic tool designed to identify AI-generated manipulated images and videos using Support Vector Machine (SVM) algorithms. Testing involved analyzing authentic and manipulated media within realistic forensic workflows. Results indicated that the plug-in successfully detected approximately 45.5% of manipulated images but exhibited a concerning 40% false positive rate on authentic media. Additionally, its video detection capability was found to be non-functional, and the tool lacked critical metadata analysis, limiting its forensic utility. Comparisons with specialized deepfake detection tools, such as Resemble AI, Deepware Scanner, and Sensity AI, highlighted the Autopsy plug-in’s inconsistent detection accuracy and its limitations in practical scenarios. The findings underscore the need for further development of comprehensive, reliable forensic tools capable of addressing the evolving challenges posed by advanced deepfake technologies.
This study focused on the application of Natural Language Processing (NLP), particularly GPT models, to improve database security, an area where traditional security approaches fall short. The work presents the configuration, fine-tuning, and assessment of a customized GPT model trained on domain-specific data to automate tasks such as threat identification, encryption, and compliance testing. Iterative prompt engineering ensured that the model was appropriately tuned to handle difficult database security issues with precision and applicability. The research pilot-tested the model with 16 participants comprising database administrators, cybersecurity practitioners, and computer science students. Feedback was collected through surveys and structured tests, and responses were analyzed for accuracy, relevance, and user satisfaction. Results showed that the GPT model customized for database security generated recommendations that outperformed the generic GPT model. The study demonstrated that AI has great potential as a database security solution; however, it also has drawbacks, including the limited size of the dataset and challenges in niche settings. Recommendations for future work in this thesis include using larger datasets; combining inputs from vision, language generation, and other modalities; and addressing further ethical concerns. This work contributes to enhancing database security by demonstrating the ability of AI models to counter novel security threats.
University cybersecurity labs often require isolated environments to support malware analysis, penetration testing, and secure networking coursework. However, these same isolation requirements hinder remote access, making it difficult for students with off-campus obligations to participate fully. The COVID-19 (Coronavirus Disease 2019) pandemic highlighted the need for secure and flexible remote access to educational labs. This paper presents the design and evaluation of a secure, scalable, and cost-effective remote access model tailored to academic cybersecurity environments. Implemented in the TB 206 cybersecurity lab at Austin Peay State University (APSU), the solution integrates pfSense, an open-source firewall and router platform; Cisco-managed switches for VLAN segmentation; and Tailscale to enable zero-trust, identity-based remote connectivity [12][15]. The solution enforces strict access controls through both pfSense firewall rules and Tailscale Access Control Lists (ACLs), while leveraging open-source tools and commodity hardware. A phased deployment strategy ensured operational stability at each stage, from VLAN design to remote testing. Results demonstrate that segmented, role-based remote access can be securely implemented without exposing internal services to the public internet. The proposed methodology serves as a replicable blueprint for other institutions seeking to modernize their lab infrastructure.
Jonathan D. Lensert (Dec 2025). GENERATIVE AI SECURITY: UNDERSTANDING CONCEPTS AND FOUNDATIONS OF SECURITY
To be Updated.