
Comment low hanging fruit (Score 1) 57

I'm thinking that traditional manufacturing hasn't had the money or the knowhow to push automation in ways that have only recently become possible. Amazon is heavily into automation with over 1 million robots in its fulfillment centers and warehouses worldwide. They will scoop up low performers and boost them, easy money.

Comment openclaw, is that you? (Score 1) 19

OpenAI says they want to 'focus on creating so-called "agentic" AI capabilities within the new superapp, in which artificial-intelligence systems can work autonomously on a user's computer to carry out a variety of tasks'. It sounds suspiciously like OpenClaw (https://openclaw.ai/).

No surprise there. OpenAI recently acqui-hired the creator of OpenClaw.

Comment Very likely (Score 1) 43

" last year Nvidia saw about $500 billion in demand for its Blackwell and upcoming Rubin chips through 2026"

That's very good revenue, and the next generation is already available, reportedly "3.5x faster than the Blackwell architecture on model-training tasks and 5x faster on inference tasks". If you are building AI data centers, that translates into very serious money. The Rubin chip architecture could easily be worth twice the price of Blackwell.
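The back-of-the-envelope reasoning behind "worth twice the price" can be spelled out. This is illustrative arithmetic only: the normalized prices below are hypothetical placeholders, and only the 3.5x/5x speedups come from the quote.

```python
# Hypothetical normalized pricing; only the speedup figures are from the quote.
blackwell_price = 1.0      # normalize Blackwell's price to 1 unit
train_speedup = 3.5        # quoted: 3.5x faster on model training
infer_speedup = 5.0        # quoted: 5x faster on inference

# Price at which Rubin would merely match Blackwell's cost per unit of throughput:
breakeven_train = blackwell_price * train_speedup   # 3.5x Blackwell's price
breakeven_infer = blackwell_price * infer_speedup   # 5.0x Blackwell's price

# Even at double Blackwell's price, Rubin still wins on cost per training throughput:
rubin_price = 2.0 * blackwell_price
advantage = breakeven_train / rubin_price
print(advantage)  # 1.75, i.e. 1.75x more training work per dollar
```

In other words, under these toy numbers a 2x price tag still leaves a comfortable margin, which is why the "twice the price" guess is plausible.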

Submission + - the highest-paying jobs have the worst scores (fortune.com)

ZipNada writes: Over the weekend, Andrej Karpathy, the OpenAI cofounder and former director of AI at Tesla, posted a graphic showing how susceptible every occupation is to AI and automation, using Bureau of Labor Statistics data. Different jobs received scores on a scale of 0 to 10, with 10 being most exposed.

While the overall weighted exposure was 4.9, Karpathy's data also showed that professions earning more than $100,000 a year had the worst average score (6.7), while those earning less than $35,000 had the lowest exposure (3.4).

Comment "now barely programming" (Score -1, Troll) 150

"the refuseniks are deluding themselves when they claim that A.I. doesn't work well and that it can't work well" and they sure are. AI code generation works very well indeed, the recent models are pretty incredible in my experience.

But I just plod along with an IDE and an integrated LLM chat panel. I tell it to propose some solutions for my objectives, choose one to take a flyer on, have it write a phased implementation plan, then walk it through the steps and smoke test all along the way. If you like how things turned out, you can easily get a suite of rigorous regression tests to keep it that way. If you don't? No great loss; try plan B or C now that you are better informed.
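The "lock it in with regression tests" step looks something like this in practice. A minimal sketch: `slugify` here is a made-up stand-in for whatever helper the LLM produced, and the tests simply pin down its observed behavior so later refactors can't drift.

```python
import re

def slugify(title: str) -> str:
    # Hypothetical LLM-written helper: lowercase, replace runs of
    # non-alphanumerics with a hyphen, and trim stray hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_basic():
    # Freeze the happy-path behavior the smoke test showed.
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_separators():
    # Freeze the edge-case behavior too: repeated separators collapse.
    assert slugify("  AI -- code  gen ") == "ai-code-gen"
```

Once a suite like this exists, you can let the model rewrite the helper freely; a failing test is the tripwire.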

A huge chunk of work can happen in a day; I'm at least 10x more productive with it. At times I flog the LLMs unmercifully to get what I want, but I do have to keep an eye on my credit expenditure. You can burn through money with this tech.

Now I'm seeing articles where the more sophisticated practitioners run multiple agents working cooperatively to get things done. One writes code, another reviews and tests it, and a third oversees the first two. A guy sets off a huge chain of activity by dictating some prompts through his cellphone while flying business class. This is how you work when you have an unlimited amount of AI compute. It is another layer of abstraction above me.
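The writer/reviewer/overseer pattern can be sketched as a toy control loop. Everything here is a placeholder: the three "agents" are plain functions, not any real framework, and an actual setup would call an LLM API inside each role.

```python
# Toy sketch of the multi-agent pattern: writer drafts, reviewer gates,
# overseer retries and decides when to give up. All roles are stubs.
def writer(task: str) -> str:
    # Stub for an LLM call that drafts code for the task.
    return f"def solve():\n    # draft for: {task}\n    pass"

def reviewer(code: str) -> bool:
    # Stub for an LLM-plus-tests review pass; here just a sanity check.
    return code.startswith("def ")

def overseer(task: str, max_rounds: int = 3):
    # Coordinates the other two: retry until the reviewer accepts,
    # or return None to escalate to a human.
    for _ in range(max_rounds):
        draft = writer(task)
        if reviewer(draft):
            return draft
    return None
```

The real versions differ wildly in how the reviewer gates work, but the shape (generate, check, retry, escalate) is the common core.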

Comment Re:An ongoing rewrite (Score 1) 27

>> Why didn't they use AI to rewrite it?

They did of course. But it's too late to catch up. Let's say you have shamefully migrated from a top tier AI company to Meta in exchange for a gigantic amount of cash. You are intimately familiar with large swathes of the former employer's technology and may even have invented some of it, but you must start over from scratch.

Even with AI assistance are you going to reach parity with the firmly established leaders in a couple of months? That's the timeframe for substantial releases these days.

Comment An ongoing rewrite (Score 1) 27

From what I've heard, the incredibly well-compensated engineers Zuck poached from the competition have had to rewrite much of Meta's AI architecture. Presumably Meta retained its whole corpus of training data for loading into the new model, but they are chasing a rapidly moving target. Google's Gemini 2.5 is ancient history (meaning several months old). Gemini 3.1 is widely available, but it isn't nearly as good at coding as the most recent Anthropic models. I'm skeptical that Meta can ever catch up.
