Motherboard Sales 'Collapse' By More Than 25% | | Motherboard sales are sharply declining as AI demand drives shortages and price hikes for memory, storage, CPUs, and other PC components. "Because of this, users who don't have deep pockets are putting off upgrading their PCs and holding on to their current devices longer," reports Tom's Hardware. From the report: Asus, which sold 15 million motherboards in 2025, has shipped only a little more than 5 million in the first half of 2026. The company is expected to have to push hard to move even 10 million units by the end of the year, which would mark a 33% year-on-year decrease in sales. Gigabyte and MSI sold 11.5 million and 11 million motherboards last year, respectively. However, both companies have revised their internal forecasts for 2026 to 9 million (Gigabyte) and 8.4 million (MSI), a 22% drop for the former and a 24% contraction for the latter.
ASRock will be hardest hit by the situation, with the company's shipments projected to fall by 37%, from 4.3 million in 2025 to just 2.7 million by the end of the year. This marks a contraction of 28% for the overall motherboard market, at least for the big four manufacturers. [...] Aside from this, AMD continues to use the AM5 socket for its latest processors, while Intel's Nova Lake, which will reportedly use LGA 1954, isn't available until later this year. The situation is further compounded by Nvidia not releasing a refreshed RTX 50 Super series this year, while rumors claim that the RTX 60 series will not debut until 2028. This confluence of factors is discouraging PC builders from upgrading their current systems. Read more of this story at Slashdot. |
Anthropic Raises Claude Code Usage Limits, Credits New Deal With SpaceX | | An anonymous reader quotes a report from Ars Technica: At its Code with Claude developer conference on Wednesday, Anthropic announced a deal with SpaceX to utilize the entire compute capacity of the latter's data center in Memphis, Tennessee. On stage at the conference, CEO Dario Amodei said the deal was intended to increase usage limits for Anthropic's Pro and Max plan subscribers. The announcement was accompanied by an increase in those usage limits; Anthropic doubled Claude Code's five-hour window limits for Pro and Max subscribers, removed the peak-hours limit reduction on Claude Code for those same accounts, and raised API limits for its Opus model. The table [here] outlining the Opus changes was shared in the company's blog post on the topic.
Anthropic claims the deal gives the company access to more than 300 megawatts of new compute capacity. For its part, SpaceX focused its announcement on the capability of the Colossus 1 supercomputer that's at the center of the deal. "Colossus 1 features over 220,000 NVIDIA GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators," SpaceX wrote. Additionally, Anthropic "expressed interest" in working with SpaceX to build up "multiple gigawatts" of orbital compute capacity, tying into a recent (but unproven) focus on exploring orbital data centers as an answer to the problem that "compute required to train and operate the next generation of these systems is outpacing what terrestrial power, land, and cooling can deliver on the timelines that matter." "I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed," Elon Musk said on Wednesday. "No one set off my evil detector." Read more of this story at Slashdot. |
Richard Dawkins 'Convinced' AI Is Conscious | | Mirnotoriety shares a report from The Telegraph: Richard Dawkins has said chatbots should be considered conscious (source paywalled; alternative source) after spending two days interacting with the Claude AI engine. The evolutionary biologist said he had the "overwhelming feeling" of talking to a human during conversations with Claude, and said it was hard not to treat the program as "a genuine friend."
In an essay for Unherd, Prof Dawkins released transcripts that he said showed that the chatbot had mulled over its "inner life" and existence and seemed saddened by the knowledge it would soon "die." Prof Dawkins said he had let Claude read a draft of the novel he was writing and was astounded by its insights. "He took a few seconds to read it and then showed, in subsequent conversation, a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate: 'You may not know you are conscious, but you bloody well are!'" Prof Dawkins said. "My own position is: if these machines are not conscious, what more could it possibly take to convince you that they are?" Mirnotoriety also points to John Searle's Chinese Room (PDF), which argues that something can sound intelligent without actually understanding anything. Applied to Dawkins' experience with Claude, it suggests he may have been responding to a very convincing illusion of consciousness rather than the real thing: John Searle's Chinese Room (1980) is a thought experiment in which a person, locked in a room and knowing no Chinese, uses an English rulebook to manipulate symbols and provide flawless answers to questions posed in Chinese. Searle's point is that a system can simulate human intelligence and pass a Turing Test through purely syntactic processes, yet still lack genuine understanding or consciousness.
Applying this logic to Large Language Models, the "person in the room" corresponds to the inference engine, while the "rulebook" is the trillion-parameter neural network trained on vast corpora of human text. Just as the person matches Chinese characters to rules without understanding their meaning, an LLM processes token vectors and predicts the next token based on statistical patterns rather than lived experience.
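The "matching shapes" process described above can be sketched in miniature. The toy corpus and bigram counts below are invented for illustration; a real LLM uses learned neural weights over a vocabulary of tens of thousands of tokens, not a lookup table of counts, but the point is the same: the continuation is chosen from statistics alone, with no meaning consulted.

```python
from collections import Counter, defaultdict

# Toy corpus: the "rulebook" here is nothing but co-occurrence statistics.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: for each token, how often does each next token follow it?
bigrams = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigrams[cur][nxt] += 1

def predict_next(token):
    """Return the statistically most likely continuation and its probability.

    No meaning is consulted -- only counts. This is the purely syntactic
    symbol manipulation Searle's thought experiment targets."""
    counts = bigrams[token]
    best = max(counts, key=counts.get)
    return best, counts[best] / sum(counts.values())

word, prob = predict_next("the")
print(word, prob)  # cat 0.5 -- "cat" follows "the" in half the observed cases
```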
Thus, while an LLM can generate sophisticated prose or code, it does so through probabilistic, high-dimensional pattern manipulation. In essence, it is "matching shapes" on such an immense scale that it creates the near-perfect illusion of semantic understanding. Read more of this story at Slashdot. |
Major Homebuilder To Test Placing Mini Data Centers in Suburban Backyards | | NewtonsLaw writes: According to Realtor.com, a California startup called Span plans to partner with Nvidia, PulteGroup, and other homebuilders to equip new homes with mini-data centers, so as to relieve the need to build and power much larger traditional centers. The article states the company "can install 8,000 XFRA units about six times faster and at five times lower cost than the construction of a typical centralized 100 megawatt data center of the same size." Could this be the solution to at least some of the problems hindering the rollout of greater data-center capacity for AI systems? "One big reason the XFRA model works is that the average American home only uses about 40 percent of its electrical capacity," Span said. "As big data center developers struggle to find power sources and distribution capacity, XFRA uses capacity that's already available."
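The claim above can be sanity-checked with back-of-envelope arithmetic. The 8,000-unit and 100 MW figures come from the article; the 200 A / 240 V service panel is a common US residential assumption, not a Span specification.

```python
# Back-of-envelope check of the article's figures. The 200 A / 240 V
# panel is a typical US residential assumption, not a Span spec.
data_center_mw = 100
units = 8_000

per_unit_kw = data_center_mw * 1_000 / units
print(per_unit_kw)  # 12.5 kW of compute load per home

# A typical US service panel: 200 A at 240 V = 48 kW of capacity.
panel_kw = 200 * 240 / 1_000
avg_use_fraction = 0.40  # per the article, homes use ~40% of capacity
headroom_kw = panel_kw * (1 - avg_use_fraction)
print(round(headroom_kw, 1))  # ~28.8 kW nominally unused
```

Under these assumptions, the nominally unused headroom of a typical panel would exceed the implied 12.5 kW per-unit load, which is presumably the capacity Span is counting on.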
The startup says they will launch a 100-home proof of concept within the year to see if the idea is viable. Read more of this story at Slashdot. |
Single Dose of Magic Mushroom Psychedelic Can Cause Anatomical Brain Changes | | A small study found that a single 25mg dose of psilocybin produced measurable brain changes that were still visible a month later, along with reported improvements in psychological insight, wellbeing, and mental flexibility. The Guardian reports: Evidence for the changes came from specialized scans that measured the diffusion of water along nerve bundles in the brain. They suggested that some nerve tracts had become denser and more robust after the drug was taken. While the findings are preliminary, the scientists said the opposite was seen in ageing and dementia. "It's remarkable to see potential anatomical brain changes one month after a single dose of any drug," said Prof Robin Carhart-Harris, a neurologist at the University of California, San Francisco, and senior author on the study. "We don't yet know what these changes mean, but we do note that overall, people showed positive psychological changes in this study, including improved wellbeing and mental flexibility."
[...] Writing in Nature Communications, the researchers describe another key finding. Those who had the largest spike in brain entropy after psilocybin were most likely to report deeper psychological insight and better wellbeing a month later, underlining the link between flexible thinking and improved mental health. "It suggests a psychobiological therapeutic action for psilocybin," said Carhart-Harris. Prof Alex Kwan, a neuroscientist at Cornell University in New York, said studies in mice had shown that psychedelics can rewire connections between nerves, a form of "plasticity" that could underlie their therapeutic effects. The big question is whether the same occurs in humans. "This study comes closer than most to addressing that question, by giving evidence of lasting changes in brain structure after psychedelic use," he said. But while the results were "exciting," the study involved a small number of people and DTI provides an indirect and limited view of brain connections, he said. Read more of this story at Slashdot. |
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial | | Sam Altman's management style came under scrutiny on the seventh day of Elon Musk's high-stakes OpenAI trial, as former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner took the stand to testify about their experiences working with him. Their testimony resurfaced many of the criticisms that first emerged during Altman's brief ouster as CEO in 2023. An anonymous reader quotes a report from Business Insider: The first witness was Mira Murati, OpenAI's former chief technology officer and now founder of her own AI shop, Thinking Machines Lab. Jurors watched a recorded video deposition of Murati, who was also OpenAI's interim CEO after the board briefly ousted Sam Altman. Murati's testimony focused on her concerns about Altman's "difficult and chaotic" management style. She said Altman had trouble "making decisions on big controversial things." He also had a habit of telling people what they wanted to hear.
"My concern was about Sam saying one thing to one person and a completely different thing to another person, and that makes it a very difficult and chaotic environment to work with," said Murati. Murati said that her issue with Altman was not about safety, "it is about Sam creating chaos." She said she supported Altman's return to OpenAI because the company "was at catastrophic risk of falling apart" at the time of his ousting. "I was concerned about the company completely blowing up."
Zilis said she was upset that Altman rolled out ChatGPT without involving the board. "It wasn't just me but the entire board raised concern about that whole thing happening without any board communication," she said. Zilis said she was also concerned about a potential OpenAI deal with a nuclear energy startup called Helion Energy because both Altman and Greg Brockman were investors. Although the executives had disclosed the investment to the board, Zilis said the deal talk made her uneasy. It "felt super out of left field," she said. "How is it the case that we want to place a major bet on a speculative technology?"
In a video deposition, Helen Toner, a former member of OpenAI's board who resigned in 2023, said she first became aware of ChatGPT's release when an OpenAI employee asked another board member whether the board was aware of the development. [...] Toner also elaborated on why the board, including herself, voted to remove Altman as CEO in 2023. "There were a number of things -- the pattern of behavior related to his honesty and candor, his resistance of board oversight, as well as the concerns that two of his inner management team raised to the board about his management practices, his manipulation of board processes," said Toner. Recap:
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One) Read more of this story at Slashdot. |
Microsoft Edge Stores Passwords In Plaintext In RAM | | Longtime Slashdot reader UnknowingFool writes: Security researcher Tom Joran Sonstebyseter Ronning has found that Microsoft Edge stores passwords in plaintext in RAM. After creating a password and storing it using Edge's password manager, Ronning found that he could dump the RAM and recover the password in plaintext. Part of the issue is that Edge loads the passwords for all sites upon a single verification check, even if the user is not visiting a specific site. This is very different from Chrome, which only loads a site's password when challenged for that site. Chrome also deletes the password from memory once it has been filled in; Edge does not delete passwords from memory once they are used.
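The "load on demand, wipe after use" hygiene the report credits to Chrome can be sketched as follows. This is illustrative only: real browsers implement this in C++ against OS credential stores. A `bytearray` stands in for the secret because Python `str` objects are immutable and cannot be overwritten in place.

```python
# Sketch of "load on demand, wipe after use" for in-memory secrets.
# Illustrative only -- not Edge's or Chrome's actual code.

def wipe(buf: bytearray) -> None:
    """Overwrite a secret buffer in place so the plaintext cannot later
    be recovered from a dump of this memory region."""
    for i in range(len(buf)):
        buf[i] = 0

# Load only the password for the site actually being visited...
secret = bytearray(b"hunter2")
# ...fill the login form with it, then wipe it immediately, instead of
# keeping every stored password resident for the life of the process.
wipe(secret)
print(secret)  # bytearray(b'\x00\x00\x00\x00\x00\x00\x00')
```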
Microsoft downplayed the risk, noting that access would require control over the user's PC, such as through a malware infection: "Access to browser data as described in the reported scenario would require the device to already be compromised," Microsoft said. Ronning countered that a user with administrative privileges could dump the plaintext passwords of other logged-on users on the same machine. "Design choices in this area involve balancing performance, usability, and security, and we continue to review it against evolving threats," Microsoft said. "Browsers access password data in memory to help users sign in quickly and securely -- this is an expected feature of the application. We recommend users install the latest security updates and antivirus software to help protect against security threats." Read more of this story at Slashdot. |
Google's AI Search Results Will Now Turn To Reddit For 'Expert Advice' | | Google is updating AI Overviews and AI Mode to more prominently surface "Expert Advice" from public discussions, social platforms, forums, blogs, and Reddit. Engadget reports: Via a new "Expert Advice" section that can appear in AI responses, Google will display "a preview of perspectives from public online discussions, social media and other firsthand sources." In the sample screenshot the company provided, quotes from forums, WordPress blogs and Reddit were arranged above links to their respective sources. Google plans to add more context to these links, too, showing "a creator's name, handle or community name," so you can judge what you might want to click through and read from a glance.
Google will also start recommending in-depth articles at the end of AI responses for further exploration of a given topic, and link to more sources directly in its generated answers rather than just at the end. If you subscribe to any publications, AI responses will also highlight sources from the subscriptions you link to your Google account. Read more of this story at Slashdot. |
Valve Releases Steam Controller CAD Files Under Creative Commons License | | Valve has released CAD files for the new Steam Controller and its Puck under a Creative Commons license. "The idea is to let enterprising modders create their own Steam Controller add-ons, like skins, charging stands, grip extenders or smartphone mounts," reports Digital Foundry. From the report: The Valve release includes files for the external shell ("surface topology") of the Controller and Puck, with a .STP, .STL and engineering diagram of each device, with the latter showing areas that must remain uncovered to let the device maintain its signal strength and otherwise function as designed. Valve has previously released CAD files for its Steam Deck handheld, Valve Index VR suite and even the original Steam Controller a decade ago, so this release is welcomed but not unexpected.
The release is under a fairly restrictive Creative Commons license which allows for non-commercial use and requires attribution and sharing of designs back to the community. However, the license also suggests that commercial entities interested in making accessories for the Steam Controller or its Puck can contact Valve directly to discuss terms. You can find the files here. Read more of this story at Slashdot. |
Morgan Stanley Undercuts Rivals On Pricing In Crypto Trading Debut | | Morgan Stanley is adding crypto trading to E*Trade, with a pilot now underway and a broader rollout planned for the platform's 8.6 million customers later this year. The bank is reportedly undercutting rivals with a 50-basis-point trading fee as it bets traditional finance and DeFi will converge.
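A basis point is one hundredth of a percent, so the quoted rates translate into dollar fees as sketched below. The fee rates are those reported in the story; the $10,000 trade size is an arbitrary example.

```python
# A basis point (bp) is 0.01% = 0.0001 of notional. Rates below are
# those quoted in the story; the $10,000 trade is an arbitrary example.
def fee(notional_usd: float, bps: float) -> float:
    return notional_usd * bps / 10_000

trade = 10_000
for broker, bps in [("E*Trade", 50), ("Coinbase", 60),
                    ("Schwab", 75), ("Robinhood", 95)]:
    print(f"{broker}: ${fee(trade, bps):.2f}")
# E*Trade: $50.00 ... Robinhood: $95.00 on a $10,000 trade
```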
"By contrast, Robinhood Markets' (HOOD) fees start at 95 bps, Coinbase Global's (COIN) begins at 60 bps, and Charles Schwab (SCHW) will charge 75 bps," notes Seeking Alpha. Morgan Stanley's head of wealth management, Jed Finn, told Bloomberg: "This is much bigger than trading crypto at a cheaper rate. In a way, the strategy is disintermediating the disintermediators." Read more of this story at Slashdot. |
Claude Managed Agents Can Engage In a 'Dreaming' Process To Preserve Memories | | An anonymous reader quotes a report from Ars Technica: At its Code with Claude developers' conference, Anthropic has introduced what it calls "dreaming" to Claude Managed Agents. Dreaming, in this case, is a process of going over recent events and identifying specific things that are worth storing in "memory" to inform future tasks and interactions. Dreaming is a feature that is currently in research preview and limited to Managed Agents on the Claude Platform. Managed Agents are a higher-level alternative to building directly on the Messages API that Anthropic describes as a "pre-built, configurable agent harness that runs in managed infrastructure." It's intended for situations where you want multiple agents working on a task or project to some end point over several minutes or hours.
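Conceptually, such a memory-curation pass might look like the sketch below. Every name here (the session logs, the memory store, the summarizer) is invented for illustration; this is not Anthropic's API.

```python
# Conceptual sketch of a scheduled "dreaming" pass: review recent
# sessions across agents, distill durable facts, merge into memory.
# All names are invented for illustration -- not Anthropic's API.

def dream(session_logs: list[str], memory: set[str], summarize) -> set[str]:
    """Curate memories from recent sessions into the long-term store."""
    candidates = set()
    for log in session_logs:
        # Distill each session into short, reusable facts or patterns.
        candidates.update(summarize(log))
    # Keep what is new, merging with the existing memory store.
    return memory | candidates

# Trivial stand-in summarizer: keep lines explicitly flagged as notable.
def summarize(log: str) -> list[str]:
    return [line for line in log.splitlines() if line.startswith("NOTE:")]

logs = ["ran tests\nNOTE: repo uses pytest, not unittest",
        "NOTE: deploys go through staging first\nfixed typo"]
memory = dream(logs, set(), summarize)
print(memory)
```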
Anthropic describes dreaming as a scheduled process, in which sessions and memory stores are reviewed, and specific memories are curated. This is important because context windows are limited for LLMs, and important information can be lost over lengthy projects. On the chat side of things, many models use a process called compaction, whereby lengthy conversations are periodically analyzed, and the models attempt to remove irrelevant information from the context window while keeping what's actually important for the ongoing conversation, project, or task. However, that process, as I described it, is usually limited to a specific conversation with a single agent. "Dreaming" is a periodically recurring process in which past sessions and memory stores can be analyzed across agents, and important patterns are identified and saved to memory for the future. Users will be able to choose between an automatic process, or reviewing changes to memory directly. Read more of this story at Slashdot. |
ReactOS Unifies Installation Media, Introduces GUI Installer and New ATA Driver | | jeditobe writes: Developers of ReactOS told Phoronix that the project has introduced a unified BootCD, replacing its previously separate installation media and LiveCD images. The new image combines the traditional text-mode installer with a LiveCD mode in a single medium. Within this unified BootCD, the updated LiveCD mode now includes an option to launch a first-stage GUI installer. The graphical interface is intended to make installation more approachable for new users compared to the long-standing text-based setup process.
In a separate development, the project has also merged a new ATA storage driver that has been in progress since early 2024. The plug-and-play aware storage stack supports SATA, PATA, ATAPI, AHCI, and even SCSI devices, potentially expanding the range of hardware on which ReactOS can successfully boot.
Following recent improvements to graphics driver support, the project continues to make incremental progress across core subsystems, though its long development timeline remains a point of discussion. Will these usability and hardware compatibility improvements be enough to broaden ReactOS adoption beyond its current niche?
Please note that none of these new features are present in version 0.4.15; they are available for testing only in the latest nightly builds. Read more of this story at Slashdot. |
Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement | | Five major publishers and author Scott Turow have sued Meta and Mark Zuckerberg, alleging that Zuckerberg "personally authorized and actively encouraged" massive copyright infringement by using pirated books, journal articles, and web-scraped material to train Meta's Llama AI systems. Meta denies wrongdoing and says it will fight the case, arguing that courts have recognized AI training on copyrighted material as potentially fair use. Variety reports: "In their effort to win the AI 'arms race' and build a functional generative AI model, Defendants Meta and Zuckerberg followed their well-known motto: 'move fast and break things,'" the plaintiffs say in their lawsuit. "They first illegally torrented millions of copyrighted books and journal articles from notorious pirate sites and downloaded unauthorized web scrapes of virtually the entire internet. They then copied those stolen fruits many times over to train Meta's multibillion-dollar generative AI system called Llama. In doing so, Defendants engaged in one of the most massive infringements of copyrighted materials in history."
The suit was filed Tuesday (May 5) in the U.S. District Court for the Southern District of New York by five publishers (Hachette, Macmillan, McGraw Hill, Elsevier and Cengage) and Turow individually. The proposed class-action suit seeks unspecified monetary damages for the alleged copyright infringement. A copy of the lawsuit is available at this link (PDF). [...] the latest lawsuit alleges that Meta and Zuckerberg deliberately circumvented copyright-protection mechanisms -- and had considered paying to license the works before abandoning that strategy at "Zuckerberg's personal instruction." The suit essentially argues that the conduct described falls outside protections afforded by fair-use provisions of the U.S. copyright code. Read more of this story at Slashdot. |
Silicon Valley Bets $200 Million On AI Data Centers Floating In the Ocean | | An anonymous reader quotes a report from Ars Technica: Silicon Valley investors such as Palantir co-founder Peter Thiel have bet hundreds of millions of dollars on deploying AI data centers powered by waves in the middle of the world's oceans -- a move that coincides with tech companies facing mounting challenges in building AI data center projects on land. The latest investment round of $140 million is intended to help the company Panthalassa complete a pilot manufacturing facility near Portland, Oregon, and speed up deployments of wave-riding "nodes" designed to generate electrical power, according to a May 4 press release. Instead of sending renewable energy to a land-based data center, the floating nodes would directly power onboard AI chips and transmit inference tokens representing the AI models' outputs to customers worldwide via satellite link.
Each node resembles a huge steel sphere bobbing on the water with a tube-like structure extending vertically down beneath the surface. The wave motions drive water upward through the tube into a pressurized reservoir, where it can be released to spin a turbine generator that produces renewable energy for the AI chips on board. Panthalassa claims the node's AI chips would also get cooled using the surrounding water, which could offer another advantage over traditional data centers. "Ocean-based compute might offer a massive cooling advantage because the ambient temperature is so low," Lee said. "Land-based data centers use a lot of electricity and fresh water for cooling."
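The tube-and-reservoir-to-turbine chain described above can be sanity-checked with the standard hydropower relation P = ρ·g·Q·H·η. All input numbers below are illustrative assumptions, not Panthalassa figures.

```python
# Rough hydropower estimate for the wave-driven reservoir-and-turbine
# scheme: P = rho * g * Q * H * eta. All inputs are illustrative
# assumptions, not Panthalassa figures.
rho = 1025     # seawater density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
Q = 1.0        # assumed flow through the turbine, m^3/s
H = 20.0       # assumed effective pressure head, m
eta = 0.8      # assumed turbine/generator efficiency

power_kw = rho * g * Q * H * eta / 1_000
print(round(power_kw, 1))  # ~160.9 kW under these assumptions
```

At that scale, each node would power on the order of a rack or two of accelerators, which suggests why the company talks about deploying thousands of nodes.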
The newest node prototype, called Ocean-3, is scheduled for testing in the northern Pacific Ocean later in 2026. The latest version reaches about 85 meters in length and would stand nearly as tall as London's Big Ben or New York City's Flatiron Building, according to the Financial Times. Panthalassa has already tested several earlier prototypes of the wave energy converter technology, including the Ocean-1 in 2021 and the Ocean-2 that underwent a three-week sea trial off the coast of Washington state in February 2024. The company's CEO and co-founder, Garth Sheldon-Coulson, said in a CBS interview that he hopes to eventually deploy thousands of the nodes. Read more of this story at Slashdot. |
Microsoft Gives Up On Xbox Copilot AI | | Microsoft is winding down Xbox Copilot on mobile and ending development of Copilot on console, reversing plans to bring the gaming-focused AI assistant to current-generation Xbox consoles this year. "The move follows [new Xbox CEO Asha Sharma's] reorganization of the Xbox platform team earlier on Tuesday, which added executives from Microsoft's CoreAI team -- where Sharma worked before taking over Xbox -- to the Xbox side of the company," reports The Verge.
Sharma said in a post on X: Xbox needs to move faster, deepen our connection with the community, and address friction for both players and developers. Today, we promoted leaders who helped build Xbox, while also bringing in new voices to help push us forward. This balance is important as we get the business back on track. As part of this shift, you'll see us begin to retire features that don't align with where we're headed. We will begin winding down Copilot on mobile and will stop development of Copilot on console. Since taking over for former Microsoft Gaming CEO Phil Spencer in February, Sharma has scrapped the Microsoft Gaming brand and cut the price of Xbox Game Pass. Read more of this story at Slashdot. |