Aug. 22, 2023, 9 p.m.

The popularity and range of use cases for artificial intelligence (AI) have soared in recent years, driven largely by the widespread adoption of ChatGPT, which put powerful AI within reach of the general public. In its wake, a wave of GPT-based coding tools has emerged, each promising to boost developer productivity. As society becomes increasingly digitized and interconnected, however, the security and integrity of software matter more than ever, so it is worth asking what it means to let AI help developers write code at a time of growing cyber threats.

Stanford University recently examined exactly this question in a study titled "Do Users Write More Insecure Code with AI Assistants?", and its findings are revealing.

On one hand, AI assistants have undeniably eased the burden on developers, letting them write and ship code quickly. As in other industries, AI has accelerated development, helping organizations improve efficiency and productivity. That acceleration continues a long trend: cloud-native technology, the DevOps methodology, and continuous integration/continuous delivery (CI/CD) pipelines all evolved to speed up software development and deployment, and AI now helps developers produce code at an unprecedented pace.

On the other hand, using AI to reduce developers' workload and increase development speed introduces additional security risk. Security testing is vital to software development, yet it often takes a backseat to release deadlines. Recent research by ESG illustrates the problem: 45% of software is released without undergoing security checks or tests, and 32% of developers skip security processes entirely.

So how does AI influence software security? The Stanford study found that AI coding assistants have exactly the effect security professionals feared: developers who use them tend to produce less secure code than those who do not. Paradoxically, developers who lean on AI also tend to believe their code is more secure, creating a false sense of security.
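To make the failure mode concrete, here is a brief, hypothetical illustration in Python. The study's tasks covered areas such as SQL and encryption; the snippet below is not taken from the study, and the names (the users table, the find_user functions) are invented. It contrasts the kind of string-built SQL query an assistant might plausibly suggest with the parameterized form that prevents injection.

    import sqlite3

    # INSECURE: interpolating user input directly into SQL enables injection;
    # e.g. username = "x' OR '1'='1" would match every row in the table.
    def find_user_insecure(conn: sqlite3.Connection, username: str):
        query = f"SELECT id, email FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    # SAFER: a parameterized query lets the database driver handle escaping,
    # so the input is treated as data rather than as SQL.
    def find_user_safe(conn: sqlite3.Connection, username: str):
        query = "SELECT id, email FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()

Both versions return the same rows for benign input, which is precisely why the insecure one slips through review when no security testing is in place.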

These findings are not entirely surprising, since AI coding assistants operate on prompts and models that lack deep project-specific or contextual understanding. The industry remains optimistic that these limitations will be addressed as the technology advances. In the meantime, the findings underscore how critical it is to test code thoroughly before release. As AI coding assistants proliferate, and as malicious actors use AI to detect vulnerabilities more efficiently, the need for scalable and robust software testing tools only grows.

As coding methods evolve, so must testing approaches. Modern software security should rely heavily on automation and generate test cases efficiently. By incorporating self-learning AI into existing testing methods, test cases can be created automatically from information about the system under test; each test run then improves the next, reducing manual workload while producing test cases beyond what a human would write by hand (a minimal sketch of this idea follows below). Integrating this form of testing into CI/CD yields a testing approach that scales to the output of AI coding tools.

Despite the sobering findings of the Stanford study, this approach offers hope: the benefits of AI coding assistants can be harnessed without compromising security or efficiency. Coding assistants are here to stay, both as a necessity and as a potential source of progress, so testing methods must evolve and adapt to keep code secure. Even so, keeping humans involved in the process remains the most critical element.

The challenges facing the industry are formidable, but there is reason for optimism among cybersecurity professionals. Phil Venables, Google Cloud CISO, suggests that by prioritizing AI security and safety, the impact of AI on cybersecurity can be managed effectively. That perspective aligns with the call from Senate Select Committee on Intelligence Chairman Mark Warner, who urged leading tech companies to prioritize AI security and safety further, since voluntary commitments to the Biden administration may not adequately address these concerns.
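As a minimal sketch of automated test-case generation, consider property-based testing with the Python library Hypothesis, which generates fresh inputs on every run and shrinks any failure to a minimal counterexample. This is an illustrative stand-in for the self-learning testing the article describes, not the specific tooling its sources use; parse_hostname is a hypothetical function included inline so the sketch runs standalone.

    from hypothesis import given, strategies as st

    # Hypothetical system under test: an AI-assisted helper we want to harden.
    # In a real project this would be imported from the codebase instead.
    def parse_hostname(raw: str):
        raw = raw.strip()
        return raw.split("/", 1)[0].lower() or None

    # Hypothesis generates many inputs per run, so edge cases surface
    # without anyone writing them by hand.
    @given(st.text())
    def test_parse_hostname_never_crashes(raw: str) -> None:
        result = parse_hostname(raw)
        # Property: arbitrary input never raises, and the return type
        # is predictable (a string or None).
        assert result is None or isinstance(result, str)

Running tests like this inside the CI/CD pipeline, for example as part of the existing test stage, is what gives the approach its scale: every commit, AI-written or not, is probed automatically.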

