BrianFagioli writes:
AI might be the future of software development, but a new report suggests we're not quite ready to take our hands off the wheel. Veracode has released its 2025 GenAI Code Security Report, and the findings are pretty alarming. Out of 80 carefully designed coding tasks completed by over 100 large language models, nearly 45 percent of the AI-generated code contained security flaws.
That's not a small number. These are not minor bugs, either. We're talking about real vulnerabilities, with many falling under the OWASP Top 10, which highlights the most dangerous issues in modern web applications. The report found that when AI was given the option to write secure or insecure code, it picked the wrong path nearly half the time. And perhaps more concerning, things aren't getting better. Despite improvements in generating functional code, these models show no progress in writing more secure code.
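The report itself doesn't reproduce code samples, but a minimal sketch can illustrate the kind of secure-versus-insecure fork described. SQL injection (OWASP Top 10, A03:2021 Injection) is a classic case: the two functions below answer the same prompt, yet one splices user input into the query string while the other parameterizes it. The table and function names are hypothetical, chosen only for this illustration.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Insecure pattern: the input is spliced directly into the SQL string,
    # so a value like "x' OR '1'='1" rewrites the query's logic (injection).
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # Secure pattern: a parameterized query keeps the input as data,
    # never as SQL syntax.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

malicious = "nobody' OR '1'='1"
print(len(find_user_insecure(conn, malicious)))  # 2 — every row leaks
print(len(find_user_secure(conn, malicious)))    # 0 — no such user
```

Both versions pass a naive functional test on well-behaved input, which is exactly why functional correctness improving while security stagnates is plausible: nothing in ordinary usage distinguishes them until an attacker supplies the input.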
rabbitface25 writes:
Jason Pruet is working with teams across Los Alamos to help prepare for a future in which artificial intelligence will reshape the landscape of science and security. Five years ago, he viewed AI as just another valuable tool, but because of recent advances in the power of large AI models, Pruet now believes AI will be broadly disruptive. He no longer views the technology as just a tool, but as a fundamental shift in how scientists approach problems and make discoveries. The global race now underway is about harnessing the technology's potential while mitigating its harms.
1663: This year, the Lab invested more in AI-related work than at any point in history. You’ve spoken about government investment in AI in terms of returning to a post–World War II paradigm of science for the public good. Can you expand on that?
JP: Before World War II, the government wasn’t really involved in science the way we think of it today. But after WWII, Vannevar Bush, a key figure behind the Manhattan Project, laid the groundwork for permanent government support of science and engineering. I’m paraphrasing here, but he had this beautiful quote where he said, “Just as it’s been the policy of the government to keep the frontiers of exploration open for everyone, so it’s the policy of the government that the frontiers of knowledge are open for everyone.”
That uniquely American idea helped build the American Century. After the war, Los Alamos leadership realized that the future of security and science depended on the ability to study energetic particles and nuclear reactions. The problem was that no university could do it because they didn’t have the means to build these giant machines. And the Lab couldn’t do it without the support of the universities, so they made a deal where the Atomic Energy Commission would pay for these giant facilities, like the Stanford Linear Accelerator Center. Without that kind of infrastructure, the country had no credible way of being a scientific superpower anymore.
For a variety of reasons, government support for big science has been eroding since then. Now, AI is starting to feel like the next great foundation for scientific progress. Big companies are spending billions on large machines, but the buy-in costs of working at the frontiers of AI are so high that no university has the exascale-class machines needed to run the latest AI models. We’re at a place now where we, meaning the government, can revitalize that pact by investing in the infrastructure to study AI for the public good.
1663: That’s a fascinating parallel. You mentioned the massive infrastructure required for cutting-edge AI research. Is that something universities can collaborate on with Los Alamos?