
A Wall Street Journal software engineer has called for artificial intelligence companies to allocate one penny to explaining their models for every dollar spent on computing power, arguing users need transparency when handing over creative and cognitive labour to machines.

John West, who builds and fine-tunes AI models for newsroom work, says current AI packaging hides crucial information from users even as companies race toward systems capable of performing the most economically valuable work better than humans, The Wall Street Journal reports.

West suggested that companies could publish their training data and build programming playgrounds where users adjust parameters and explore vector space alongside their machines. He argues this transparency would make AI feel as exhilarating for general users as it does for engineers who can access the technology’s inner workings.

The engineer contrasts his experience working directly with model parameters, training data and vector mathematics with how most people interact with opaque chatbots. He argues this knowledge gap matters more than understanding household appliances because the stakes extend beyond clean clothes to fundamental human work in reading, writing, art and science.

West demonstrated his point by building a small language model trained on Herman Melville’s “Bartleby the Scrivener” using 140-character text chunks. The model learned spacing after 50 training passes, correct spelling after 500 passes, and nearly produced Bartleby’s famous refusal “I would prefer not to” after around 5,000 passes, instead declaring “I would prefer to yes.”
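West’s demonstration used a neural language model, which the article does not reproduce. As a much simpler sketch of the same underlying idea, that a model learns which characters tend to follow which, the toy character-level Markov chain below (an assumption for illustration, not West’s code) can be trained on Bartleby’s refrain and will regenerate it from the statistics alone:

```python
import random
from collections import Counter, defaultdict

def train_char_model(text, order=3):
    """Count which character follows each `order`-character context."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def generate(model, seed, length=60, rng=None):
    """Extend `seed` one character at a time by sampling from the counts."""
    rng = rng or random.Random(0)
    out = seed
    order = len(seed)
    for _ in range(length):
        counts = model.get(out[-order:])
        if not counts:  # unseen context: nothing to sample
            break
        chars, weights = zip(*counts.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

# Train on the famous refusal and let the chain continue the phrase.
model = train_char_model("I would prefer not to. " * 20, order=3)
print(generate(model, "I w", length=40))
```

A real language model replaces the raw counts with learned parameters and generalises to unseen contexts, which is why West’s model needed thousands of training passes rather than a single counting sweep.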

Most AI companies decline to reveal training data sources, making it difficult for users to understand why models behave as they do. West explains that large language models transform words into vectors across hundreds of dimensions, with meanings that can be added, averaged, divided and subtracted like numbers.
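The vector arithmetic West describes can be sketched with tiny hand-made vectors. Real models learn embeddings with hundreds of dimensions; the 3-D vectors and their dimension labels below are invented purely to illustrate the operations:

```python
# Hypothetical word embeddings. Dimensions loosely encode
# (royalty, maleness, plurality) -- values invented for illustration.
king  = [0.9,  0.8, 0.0]
man   = [0.1,  0.8, 0.0]
woman = [0.1, -0.8, 0.0]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def average(*vectors):
    return [sum(comps) / len(vectors) for comps in zip(*vectors)]

# The classic analogy: king - man + woman lands near "queen"
# (high royalty, negative maleness).
queen_like = add(sub(king, man), woman)  # approx. [0.9, -0.8, 0.0]
```

In a trained model the same arithmetic operates on learned vectors, which is why analogies like this can emerge from the geometry without being programmed in.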

The critique comes as companies race toward artificial general intelligence, which OpenAI chief executive Sam Altman describes as highly autonomous systems outperforming humans at most economically valuable work. West questions whether intelligence should be investigated before attempting to manufacture it, noting Altman’s revenue-focused definition would classify washing machines as artificial specific intelligence.

West argues that understanding how AI works matters more than understanding washing machines because of what humans are entrusting to these systems. Many things in daily life, he wrote, work without users needing to know precisely how; but when we turn over creative and cognitive labour to machines, rather than just the cleaning of muddy pants, the stakes are much higher.

Companies are racing to integrate AI technology into products that touch central human endeavours, including reading, writing, art, and science. West suggests greater transparency would help users engage with large language models in deeper and more meaningful ways.
