Citation
Nethala, Sainag and Kampa, Sandeep and Kosna, Srinivas Reddy (2025) Cyber Security Threats of Using Generative Artificial Intelligence in Source Code Management. Journal of Informatics and Web Engineering, 4 (2). pp. 114-124. ISSN 2821-370X
Text
1568-Article Text-14439-10-10-20250523.pdf - Published Version. Restricted to Repository staff only. Download (553kB)
Abstract
Generative Artificial Intelligence (Generative AI) models are now broadly used in academic writing and software development for the sake of productivity and efficiency. Concerns about the impact of Artificial Intelligence (AI) tools on academic integrity and cybersecurity continue to grow. Generative AI is being used for code generation, editing, and review, raising ethical and security challenges. A major concern is the inadvertent introduction of vulnerabilities into codebases: because these models are trained on large datasets, they can reproduce known bugs or malicious code that compromises software integrity. The tools may also pose additional security threats often encountered during software development. AI models trained on public data may generate code that resembles copyrighted content, creating legal grey areas around ownership. Delegating coding tasks to AI also increases exposure to adversarial attacks and model poisoning. Addressing these challenges therefore calls for a balanced approach to integrating AI into software development, combining secure coding practices, thorough testing, continuous monitoring, and collaboration between developers, security professionals, and AI researchers. Strong governance, regular audits, transparency in AI development, and the embedding of ethical standards in AI usage will help ensure that it is used safely and effectively. Generative AI should be seen as a tool to enhance, not replace, human expertise in software development. While automation can streamline workflows, developers must remain vigilant to detect and mitigate AI-induced vulnerabilities. A proactive approach that combines human oversight with AI-driven efficiency will be key to securing the future of software development.
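As a minimal sketch of the kind of AI-reproduced flaw the abstract refers to (a hypothetical illustration, not drawn from the article), the Python snippet below contrasts an injection-prone completion of the sort an assistant trained on public code might suggest with its parameterized fix.

```python
import sqlite3

# Hypothetical example: an AI assistant trained on public code may suggest the
# insecure string-concatenation pattern below, which is open to SQL injection.
def find_user_insecure(conn, username):
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()  # attacker-controlled input alters the query

# The reviewed, parameterized version closes the injection vector.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    payload = "' OR '1'='1"
    print(find_user_insecure(conn, payload))  # leaks every row
    print(find_user_safe(conn, payload))      # returns no rows
```

Human review of AI-suggested code, as the abstract argues, is what catches this class of defect before it reaches the codebase.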
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Generative AI, Source Code Management (SCM), Cyber Security Threats, AI-Generated Code Vulnerabilities, Code Injection Attacks, Data Poisoning |
| Subjects: | Q Science > QA Mathematics > QA71-90 Instruments and machines |
| Depositing User: | Ms Suzilawati Abu Samah |
| Date Deposited: | 25 Jun 2025 08:08 |
| Last Modified: | 25 Jun 2025 08:08 |
| URI: | http://shdl.mmu.edu.my/id/eprint/14012 |