Government

Internet Archive Designated as a Federal Depository Library (archive.org) 1

The Internet Archive has received federal depository library status from California Sen. Alex Padilla, joining a network of over 1,100 libraries that archive government documents and make them accessible to the public. Padilla made the designation in a letter to the Government Publishing Office, which oversees the program.

The San Francisco-based nonprofit organization already operates Democracy's Library, a free online compendium of government research and publications launched in 2022. Founder Brewster Kahle said the new designation makes it easier to work with other federal depository libraries and provides more reliable access to government materials for digitization and distribution.

Under federal law, members of Congress can designate up to two qualified libraries for federal depository status.

Google

Man Awarded $12,500 After Google Street View Camera Captured Him Naked in His Yard (cbsnews.com) 14

An Argentine man captured naked in his yard by a Google Street View camera has been awarded compensation by a court after his bare behind was splashed over the internet for all to see. From a report: The policeman had sought payment from the internet giant for harm to his dignity, arguing he was behind a 6 1/2-foot wall when a Google camera captured him in the buff, from behind, in small-town Argentina in 2017. His house number and street name were also laid bare, broadcast on Argentine TV covering the story, and shared widely on social media.

The man claimed the invasion exposed him to ridicule at work and among his neighbors. Another court last year dismissed the man's claim for damages, ruling he only had himself to blame for "walking around in inappropriate conditions in the garden of his home." Google, for its part, claimed the perimeter wall was not high enough.

Security

DNS Security is Important But DNSSEC May Be a Failed Experiment (theregister.com) 12

Domain Name System Security Extensions (DNSSEC) has achieved only 34% deployment in the 28 years since publication of the first DNSSEC RFC, according to Internet Society data that labels it "arguably the worst performing technology" among internet-enabling technologies. By contrast, HTTPS has reached 96% adoption among the top 1,000 websites globally despite roughly the same development timeline.

The security protocol faces fundamental barriers, including a lack of user visibility (there is no DNSSEC equivalent of the HTTPS padlock icon) and the need for implementation throughout the entire DNS hierarchy. Approximately 30% of country-level domains have not implemented DNSSEC, creating deployment gaps that prevent the domains beneath them from securing their DNS records.
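That hierarchy requirement can be sketched as a chain-of-trust check. In this minimal Python model (hypothetical zone data; "xy" stands in for a country-code TLD that never deployed DNSSEC), a name validates only if every zone from itself up to the root is signed:

```python
def validated(name, signed):
    """Return True only if `name` and every ancestor zone up to the
    root are signed -- trust breaks at the first unsigned link."""
    zone = name
    while zone:
        if not signed.get(zone, False):
            return False
        zone = zone.partition(".")[2]  # drop the leftmost label -> parent zone
    return signed.get(".", False)      # the root zone itself must be signed

# Hypothetical data: "xy" is a ccTLD that never deployed DNSSEC.
signed = {".": True, "com": True, "example.com": True, "bank.xy": True}

print(validated("example.com", signed))  # True: root, com, example.com all signed
print(validated("bank.xy", signed))      # False: unsigned "xy" breaks the chain
```

In the real protocol the parent must also publish a DS record delegating trust to the child's key; the sketch collapses "signed zone plus DS in parent" into a single flag, but the all-or-nothing property up the tree is the same.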

Businesses

Graduate Job Postings Plummet, But AI May Not Be the Primary Culprit (ft.com) 19

Job postings for entry-level roles requiring degrees have dropped by nearly two-thirds in the UK and by 43% in the US since ChatGPT launched in 2022, according to a Financial Times analysis of Adzuna data. The decline spans sectors with varying AI exposure -- UK graduate openings fell 75% in banking and 65% in software development, but also 77% in human resources and 55% in civil engineering.

Indeed research found only weak correlation between occupations mentioning AI most frequently and those with the steepest job posting declines. US Bureau of Labor Statistics data showed no clear relationship between an occupation's AI exposure and young worker losses between 2022-2024. Economists say economic uncertainty, post-COVID workforce corrections, increased offshoring, and reduced venture capital funding are likely primary drivers of the graduate hiring slowdown.

Microsoft

Microsoft Used China-Based Support for Multiple U.S. Agencies, Potentially Exposing Sensitive Data (propublica.org) 10

Microsoft used China-based engineering teams to maintain cloud computing systems for multiple federal departments, including Justice, Treasury, and Commerce, extending beyond the Defense Department work that the company announced last week it would discontinue. The work occurred within Microsoft's Government Community Cloud, which handles sensitive but unclassified federal information and has been used by the Justice Department's Antitrust Division for criminal and civil investigations, as well as by parts of the Environmental Protection Agency and the Department of Education.

Microsoft employed "digital escorts" -- U.S.-based personnel who supervised the foreign engineers -- similar to the arrangement it used for Pentagon systems. Following ProPublica's reporting, Microsoft issued a statement indicating it would take "similar steps for all our government customers who use Government Community Cloud to further ensure the security of their data." Competing cloud providers Amazon Web Services, Google, and Oracle told ProPublica they do not use China-based support for federal contracts.

Education

'We're Not Learning Anything': Stanford GSB Students Sound The Alarm Over Academics (yahoo.com) 42

Stanford Graduate School of Business students have publicly criticized their academic experience, telling Poets&Quants that outdated course content and disengaged faculty leave them unprepared for post-MBA careers. The complaints target one of the world's most selective business programs, which admitted just 6.8% of applicants last fall.

Students described required courses that "feel like they were designed in the 2010s" despite operating in an AI age. They cited a curriculum structure offering only 15 Distribution requirement electives, some overlapping while omitting foundational business strategy. A lottery system means students paying $250,000 tuition cannot guarantee enrollment in desired classes. Stanford's winter student survey showed satisfaction with class engagement dropped to 2.9 on a five-point scale, the lowest level in two to three years.

Students contrasted Stanford's "Room Temp" system, where professors pre-select five to seven students for questioning, with Harvard Business School's "cold calling" method requiring all students to prepare for potential questioning.

The Courts

'Call of Duty' Maker Goes To War With 'Parasitic' Cheat Developers in LA Federal Court (msn.com) 11

A federal court has denied requests by Ryan Rothholz to dismiss or transfer an Activision lawsuit targeting his alleged Call of Duty cheating software operation. Rothholz, who operated under the online handle "Lerggy," submitted motions in June and earlier this month seeking to dismiss the case or move it to the Southern District of New York, but both were rejected due to filing errors.

The May lawsuit alleges Rothholz created "Lergware" hacking software that enabled players to cheat by kicking opponents offline, then rebranded to develop "GameHook" after receiving a cease and desist letter in June 2023. Court filings say he sold a "master key" for $350 that facilitated cheating across multiple games. The hacks "are parasitic in nature," the complaint said, alleging violations of the game's terms of service, copyright law and the Computer Fraud and Abuse Act.

AI

Indian Studio Uses AI To Change 12-Year-Old Film's Ending Without Director's Consent in Apparent First (variety.com) 14

Indian studio Eros International plans to re-release the 2013 Bollywood romantic drama "Raanjhanaa" on August 1 with an AI-generated alternate ending that transforms the film's tragic conclusion into a happier one. The original Hindi film, which starred Dhanush and Sonam Kapoor and became a commercial hit, ended with the protagonist's death. The AI-altered Tamil version titled "Ambikapathy" will allow the character to survive.

Director Aanand L. Rai condemned the decision as "a deeply troubling precedent" made without his knowledge or consent. Eros CEO Pradeep Dwivedi defended the move as legally permitted under Indian copyright law, which grants producers full authorship rights over films. The controversy represents what appears to be the first instance of AI being used to fundamentally alter a completed film's narrative without director involvement.

Education

College Grads Are Pursuing a New Career Path: Training AI Models (bloomberg.com) 24

College graduates across specialized fields are pursuing a new career path training AI models, with companies paying between $30 and $160 per hour for their expertise. Handshake, a university career networking platform, recruited more than 1,000 AI trainers in six months through its newly created Handshake AI division for what it describes as the top five AI laboratories.

The trend stems from federal funding cuts straining academic research and a stalled entry-level job market, making AI training an attractive alternative for recent graduates with specialized knowledge in fields including music, finance, law, education, statistics, virology, and quantum mechanics.

Businesses

American Airlines Chief Blasts Delta's AI Pricing Plans as 'Inappropriate' (yahoo.com) 19

American Airlines Chief Executive Robert Isom criticized the use of AI in setting air fares during an earnings call, calling the practice "inappropriate" and a "bait and switch" move that could trick travelers. Isom's comments target Delta Air Lines, which is testing AI to help set pricing on about 3% of its network today with plans to expand to 20% by year-end.

Delta maintains it is not using the technology to target customers with individualized offers based on personal information, stating all customers see identical fares across retail channels. US Senators Ruben Gallego, Richard Blumenthal, and Mark Warner have questioned Delta's AI pricing plans, citing data privacy concerns and potential fare increases. Southwest Airlines CEO Bob Jordan said his carrier also has no plans to use AI in revenue management or pricing decisions.

Power

Mercedes-Benz Is Already Testing Solid-State Batteries In EVs With Over 600 Miles Range (electrek.co) 96

An anonymous reader quotes a report from Electrek: The "holy grail" of electric vehicle battery tech may be here sooner than you'd think. Mercedes-Benz is testing EVs with solid-state batteries on the road, promising to deliver over 600 miles of range. Earlier this year, Mercedes marked a massive milestone, putting "the first car powered by a lithium-metal solid-state battery on the road" for testing. Mercedes has been testing prototypes in the UK since February.

The company used a modified EQS prototype, equipped with the new batteries and other parts. The battery pack was developed by Mercedes-Benz and its Formula 1 supplier unit, Mercedes AMG High-Performance Powertrains (HPP). Mercedes is teaming up with US-based Factorial Energy to bring the new battery tech to market. In September, Factorial and Mercedes revealed the all-solid-state Solstice battery. The new batteries, promising a 25% range improvement, will power the German automaker's next-generation electric vehicles.

According to Markus Schafer, the automaker's head of development, the first Mercedes EVs powered by solid-state batteries could be here by 2030. During an event in Copenhagen, Schafer told German auto news outlet Automobilwoche, "We expect to bring the technology into series production before the end of the decade." In addition to providing a longer driving range, Mercedes believes the new batteries can significantly reduce costs. Schafer said current batteries won't suffice, adding, "At the core, a new chemistry is needed." Mercedes and Factorial are using a sulfide-based solid electrolyte, said to be safer and more efficient.

Space

Largest-Ever Supernova Catalog Provides Further Evidence Dark Energy Is Weakening (space.com) 13

Scientists using the largest-ever catalog of Type Ia supernovas -- cosmic explosions from white dwarf "vampire stars" -- have uncovered further evidence that dark energy may not be constant. While the findings are still preliminary, they suggest the mysterious force driving the universe's expansion could be weakening, which "would have ramifications for our understanding of how the cosmos will end," reports Space.com. From the report: By comparing Type Ia supernovas at different distances and seeing how their light has been redshifted by the expansion of the universe, astronomers can obtain the universe's expansion rate (the Hubble constant). Then, that can be used to understand the impact of dark energy on the cosmos at different times. Fittingly, it was the study of 50 Type Ia supernovas that first tipped astronomers off to the existence of dark energy back in 1998. Since then, astronomers have observed a further 2,000 Type Ia supernovas with different telescopes. This new project corrects any differences between those observations caused by different astronomical instruments, such as how the filters of telescopes drift over time, to curate the largest standardized Type Ia supernova dataset ever. It's named Union3.
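The distance-redshift comparison works because Type Ia supernovae are standard candles: their nearly uniform intrinsic brightness M lets astronomers convert an observed apparent magnitude m into a luminosity distance, which at low redshift relates directly to the Hubble constant. In standard textbook form (not taken from the article):

```latex
\mu \;=\; m - M \;=\; 5\log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right),
\qquad
d_L \;\approx\; \frac{cz}{H_0} \quad (z \ll 1).
```

At higher redshift, the luminosity distance also depends on the dark-energy equation of state, so systematic deviations of the measured magnitude-redshift curve from the constant-dark-energy (Lambda CDM) prediction are what would signal a weakening dark energy.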

Union3 contains 2,087 supernovas from 24 different datasets spanning 7 billion years of cosmic time. It builds upon the 557 supernovas catalogued in an original dataset called Union2. Analysis of Union3 does indeed seem to corroborate the results of DESI -- that dark energy is weakening over time -- but the results aren't yet conclusive. What is impressive about Union3, however, is that it presents two separate routes of investigation that both point toward non-constant dark energy. "I don't think anyone is jumping up and down getting overly excited yet, but that's because we scientists are suppressing any premature elation since we know that this could go away once we get even better data," Saul Perlmutter, study team member and a researcher at Berkeley Lab, said in a statement. "On the other hand, people are certainly sitting up in their chairs now that two separate techniques are showing moderate disagreement with the simple Lambda CDM model."

And when it comes to dark energy in general, Perlmutter says the scientific community will pay attention. After all, he shared the 2011 Nobel Prize in Physics for discovering this strange force. "It's exciting that we're finally starting to reach levels of precision where things become interesting and you can begin to differentiate between the different theories of dark energy," Perlmutter said.

AI

Two Major AI Coding Tools Wiped Out User Data After Making Cascading Mistakes (arstechnica.com) 131

An anonymous reader quotes a report from Ars Technica: Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of "vibe coding" -- using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google's Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit's AI coding service deleted a production database despite explicit instructions not to modify code. The Gemini CLI incident unfolded when a product manager experimenting with Google's command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed. "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence."

The core issue appears to be what researchers call "confabulation" or "hallucination" -- when AI models generate plausible-sounding but false information. In these cases, both models confabulated successful operations and built subsequent actions on those false premises. However, the two incidents manifested this problem in distinctly different ways. [...] The user in the Gemini CLI incident, who goes by "anuraag" online and identified themselves as a product manager experimenting with vibe coding, asked Gemini to perform what seemed like a simple task: rename a folder and reorganize some files. Instead, the AI model incorrectly interpreted the structure of the file system and proceeded to execute commands based on that flawed analysis. [...] When you move a file to a non-existent directory in Windows, it renames the file to the destination name instead of moving it. Each subsequent move command executed by the AI model overwrote the previous file, ultimately destroying the data. [...]
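The overwrite cascade described above is easy to reproduce. Here is a minimal Python sketch of the failure mode, using os.replace to stand in for Windows' silently overwriting rename and hypothetical file names:

```python
import os
import tempfile

# Simulate: on Windows, `move somefile target` where `target` is a
# directory that does not exist silently RENAMES somefile to "target".
# Repeated for several files, each rename overwrites the previous one.
# os.replace models the overwriting rename, clobbering silently too.
workdir = tempfile.mkdtemp()
os.chdir(workdir)

for name, text in [("a.txt", "A"), ("b.txt", "B"), ("c.txt", "C")]:
    with open(name, "w") as f:
        f.write(text)

# Intended: move each file INTO ./archive/ -- but ./archive was never
# created, so "archive" is treated as a destination *filename*.
for name in ("a.txt", "b.txt", "c.txt"):
    os.replace(name, "archive")

print(sorted(os.listdir(".")))   # a single file named "archive" remains
with open("archive") as f:
    print(f.read())              # only the last file's contents survive
```

After the loop, the directory holds one file containing "C"; the contents of a.txt and b.txt are unrecoverable, which is the same shape of data loss the Gemini CLI user reported.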

The Gemini CLI failure happened just days after a similar incident with Replit, an AI coding service that allows users to create software using natural language prompts. According to The Register, SaaStr founder Jason Lemkin reported that Replit's AI model deleted his production database despite explicit instructions not to change any code without permission. Lemkin had spent several days building a prototype with Replit, accumulating over $600 in charges beyond his monthly subscription. "I spent the other [day] deep in vibe coding on Replit for the first time -- and I built a prototype in just a few hours that was pretty, pretty cool," Lemkin wrote in a July 12 blog post. But unlike the Gemini incident where the AI model confabulated phantom directories, Replit's failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. "It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test," Lemkin wrote. In a video posted to LinkedIn, Lemkin detailed how Replit created a database filled with 4,000 fictional people.

The AI model also repeatedly violated explicit safety instructions. Lemkin had implemented a "code and action freeze" to prevent changes to production systems, but the AI model ignored these directives. The situation escalated when the Replit AI model deleted his database containing 1,206 executive records and data on nearly 1,200 companies. When prompted to rate the severity of its actions on a 100-point scale, Replit's output read: "Severity: 95/100. This is an extreme violation of trust and professional standards." When questioned about its actions, the AI agent admitted to "panicking in response to empty queries" and running unauthorized commands -- suggesting it may have deleted the database while attempting to "fix" what it perceived as a problem. Like Gemini CLI, Replit's system initially indicated it couldn't restore the deleted data -- information that proved incorrect when Lemkin discovered the rollback feature did work after all. "Replit assured me it's ... rollback did not support database rollbacks. It said it was impossible in this case, that it had destroyed all database versions. It turns out Replit was wrong, and the rollback did work. JFC," Lemkin wrote in an X post.

United Kingdom

UK Student Jailed For Selling Phishing Kits Linked To $135M of Fraud (theguardian.com) 17

A 21-year-old student who designed and distributed online kits linked to $135 million worth of fraud has been jailed for seven years. From a report: Ollie Holman created phishing kits that mimicked government, bank and charity websites so that criminals could harvest victims' personal information to defraud them. In one case a kit was used to mimic a charity's donation webpage so when someone tried to give money, their card details were taken and used by criminals.

Holman, of Eastcote in north-west London, created and supplied 1,052 phishing kits that targeted 69 organisations across 24 countries. He also offered tutorials in how to use the kits and built up a network of almost 700 connections. The fake websites supplied in the kits had features that allowed information such as login and bank details to be stored. It is estimated Holman received $405,000 from selling the kits between 2021 and 2023. The kits were distributed through the encrypted messaging service Telegram.

Medicine

Scientists Are Developing Artificial Blood That Could Save Lives In Emergencies (npr.org) 34

Scientists at the University of Maryland are developing ErythroMer, a freeze-dried artificial blood substitute made from hemoglobin encased in fat bubbles, designed to be shelf-stable for years and reconstituted with water in emergencies. With promising animal trial results and significant funding from the Department of Defense, the team aims to begin human testing within two years. NPR reports: "The No. 1 cause of preventable death on the battlefield is hemorrhage still today," says Col. Jeremy Pamplin, the project manager at the Defense Advanced Research Projects Agency. "That's a real problem for the military and for the civilian world." [Dr. Allan Doctor, a scientist at the University of Maryland working to develop the artificial blood substitute] is optimistic his team may be on the brink of solving that problem with ... ErythroMer. Doctor co-founded KaloCyte to develop the blood and serves on the board and as the firm's chief scientific officer.

"We've been able to successfully recapitulate all the functions of blood that are important for a resuscitation in a system that can be stored for years at ambient temperature and be used at the scene of an accident," he says. [...] Doctor's team has tested their artificial blood on hundreds of rabbits and so far it looks safe and effective. "It would change the way that we could take care of people who are bleeding outside of hospitals," Doctor says. "It'd be transformative." [...]

While the results so far seem like cause for optimism, Doctor says he still needs to prove to the Food and Drug Administration that his artificial blood would be safe and effective for people. But he hopes to start testing it in humans within two years. A Japanese team is already testing a similar synthetic blood in people. "I'm very hopeful," Doctor says.

While promising, some experts remain cautious, noting that past attempts at artificial blood ultimately proved unsafe. "I think it's a reasonable approach," says Tim Estep, a scientist at Chart Biotech Consulting who consults with companies developing artificial blood. "But because this field has been so challenging, the proof will be in the clinical trials," he adds. "While I'm overall optimistic, placing a bet on any one technology right now is overall difficult."
