Write an academic paper on it
Freedom of speech.
I'm putting my bets on 2 or 3. 4 is also a possibility, with or without 2 or 3.
1 won't happen merely because of a lawsuit, but it might happen if Trump decides to put eleventy-billion-percent tariffs on the UK in response.
LLMs need a filter that inspects the "final output" for signs of unwanted content and prevents that content from ever being seen.
Example:
If you design your LLM's guardrails so it won't encourage suicide, you need a fallback in case the guardrails fail:
Put an output filter that will recognize outputs that are likely to encourage suicide, and take some action like sending the prompt and reply to a human for vetting. Yes, a human vetting the final answer may let some undesired output through, but it's better than no vetting if your goal is to run a "safe" LLM, for whatever definition you choose of "safe."
If you (the person/company running the LLM) want to be "super safe" you can have the filter abort the conversation entirely.
The idea that words generated by a computer program can cause "harm" is a hallmark of the decay of Western culture.
Words, when used with intent to harm, can be a tool of social engineering to cause harm. The same words, when used without intent to harm, can have the same outcome.
Example of words intentionally doing harm: Evil supervisor to naive and ignorant subordinate: "Deliver the box on my desk to city hall then call this phone number." Box contains a bomb that will be detonated when it receives a phone call.
Technically, yes, the words didn't hurt anyone. But the net effect of the supervisor speaking these words to the naive subordinate was an explosion at city hall.
Now imagine an evil co-worker plants a box on the (now-not-evil) supervisor's desk, then asks an AI without effective guardrails to compose an email telling one of the supervisor's subordinates to deliver the package to city hall and then call a phone number, and to hack the mail system so the email goes out from the supervisor's account. Same result: "boom" at city hall.
Is this a hallmark of the [hypothetical, for the sake of this discussion] decay in Western culture? Perhaps, but I would argue that the fact that we have someone wanting a bomb to explode at city hall is a much stronger indication that something is wrong with [this hypothetical] society.
"FBI Warns KNOWN Chinese Hacking Campaign Has Expanded"
There, fixed that for you.
Less-snarky version: We should assume our adversaries have been successfully hacking us and our allies remotely since remote hacking became possible. We should also assume we don't know all of the hacks that are in progress at any given time.
ChatGPT et al are nowhere near ready to do any "heavy lifting" in Wikipedia.
But give it a few years and it will be.
The first productivity gains will come from high-quality author/editors using AI to assist with the grunt work of writing a high-quality draft. Things like finding possible references come to mind. That may already be happening without anyone noticing, because the only "difference" the AI makes is that established author/editors with reputations for producing high-quality content are more productive than they were before.
Next, you will see AI doing routine things like proofing existing articles and generating lists of recommended changes. For example, flagging possible misspellings, possible incorrect links, possible inconsistencies, possible "not supported by cited reference" issues, and the like. The output of this work won't ever directly result in an "AI edit."
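The "recommendations, not edits" workflow might look something like this toy sketch. The word list and checks are stand-ins for real tooling; the point is that the function only emits notes for a human editor and never modifies the article text.

```python
import re

# Illustrative stand-in for a real dictionary or language model.
KNOWN_WORDS = {"the", "cat", "sat", "on", "mat"}

def proof_article(text: str) -> list:
    """Return a list of recommendations; never edits the text itself."""
    notes = []
    for word in re.findall(r"[a-z]+", text.lower()):
        if word not in KNOWN_WORDS:
            notes.append(f"possible misspelling: {word!r}")
    for url in re.findall(r"https?://\S+", text):
        # Toy heuristic: flag plain-http links for an editor to check.
        if not url.startswith("https://"):
            notes.append(f"insecure or malformed link: {url}")
    return notes
```

A human reviews the notes and decides which changes to make, which keeps the final edit attributable to a person rather than the AI.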
You may also see AI replacing "non-AI" machine translations once AI gets to the point that it's consistently better than conventional "non-AI" machine-translation.
Eventually AI will be as good at generating new content as your middle-of-the-road "author/editor." At that point, either Wikipedia will have to allow AI article creation or some other organization will create a new "AI written encyclopedia" full of quickly-written high-quality (nearly zero hallucinations, etc.) content that people will flock to. Or maybe the entire idea of an encyclopedia will be replaced by "on-demand content generation" similar to the "AI results" we see today in some search engines, but better quality than what we have today.
I remember a few years ago someone predicting the "end of Wikipedia as we know it" claiming that Wikipedia may still exist, but it will have a lot fewer human visitors since people searching for information will get information from search engines that will summarize Wikipedia content for them. If that day isn't here yet, it will be here in a few short years.
If I were a health-care worker in an area where Ebola outbreaks happened from time to time, I would want this "4-day vaccine" available. As soon as anyone in my community had a suspected case, I'd take the shot and keep taking it until all local cases were over with.
I'd still do all of the standard Ebola-prevention and -mitigation strategies. This would be just one more layer.
I wouldn't want to be in a constant state of inflammation, but for a health-care worker, getting a shot every few days over a period of a few weeks during an acute epidemic/pandemic would be very useful.
... of headlines is wrong here.
PDP-11.
"it doesn't seem had" should read "it doesn't seem hard."
The main barrier to AI in tax is that most businesses will not and cannot give the AI access to their ERP systems to train with.
This shouldn't be hard in principle.
If a company wants to train an AI using its ERP data, clone the AI first, then put the cloned copy under the control of the company that owns the ERP data.
In practice, this may be expensive, but in principle, it doesn't seem had.
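The clone-then-isolate idea can be illustrated with a toy sketch. `BaseModel` and its training loop are invented stand-ins, not a real ML framework; the point is only the control boundary: the company trains a deep copy, so its ERP data never touches the vendor's original model.

```python
import copy
from dataclasses import dataclass, field

# Toy stand-in for a vendor's model; not a real ML framework.
@dataclass
class BaseModel:
    weights: dict = field(default_factory=dict)

    def train_on(self, records):
        # Trivial "training": count occurrences of each record type.
        for r in records:
            self.weights[r] = self.weights.get(r, 0) + 1

def clone_for_customer(base: BaseModel) -> BaseModel:
    """Deep-copy the vendor model so ERP data never reaches the original."""
    return copy.deepcopy(base)

vendor_model = BaseModel()
company_model = clone_for_customer(vendor_model)
company_model.train_on(["invoice", "invoice", "ledger"])  # stays on-prem
```

After training, `company_model` has learned from the ERP data while `vendor_model.weights` remains empty, which is the whole point of cloning first.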
Getting it fixed is about as possible as making water not wet under standard conditions.
Any accounting firm worth their high fee can define "standard conditions" in a way that will make the client happy. If the client wants water that is not wet under "standard conditions," they can make it happen.
On a somewhat-related note, many cities had multiple mail deliveries per day to residences in the first half of the 20th century and for businesses well into the last half.
Daily Deliveries Down to One - 1950
Also, A Short History of Home Mail Delivery (ignore the bit about Saturday delivery ending in 2013 in the USA, it didn't end).
In other words, if your computer is too old to run modern systems, LibreOffice is walking away.
Windows 10 has pretty much the same hardware requirements as Windows 7.
MacOS 15.6 (Sequoia) can run on Intel hardware as old as 2007 if you use something like OpenCore Legacy Patcher.
How many hardware guys does it take to change a light bulb? "Well the diagnostics say it's fine buddy, so it's a software problem."