One of the leading companies offering alternatives to lithium batteries for the grid just got a major loan commitment from the US Department of Energy. Eos Energy makes zinc-halide batteries, which the firm hopes could one day be used to store renewable energy at a lower cost than is possible with existing lithium-ion batteries.

The loan is the first “conditional commitment” from the DOE’s Loan Programs Office to a battery maker focused on alternatives to lithium-ion cells. The agency has previously funded a range of other climate technologies.

Today, lithium-ion batteries are the default choice to store energy in devices from laptops to electric vehicles. The cost of these batteries has fallen steadily, but there’s a growing need for even cheaper options. Solar panels and wind turbines produce energy only intermittently, and to keep an electrical grid powered by these renewable sources humming around the clock, grid operators need ways to store that energy until it is needed. The US grid alone may need between 225 and 460 gigawatts of long-duration energy storage capacity.

New batteries, like the zinc-based technology Eos hopes to commercialize, could store electricity for hours or even days at low cost. These and other alternative storage systems could be key to building a consistent supply of electricity for the grid and cutting the climate impacts of power generation around the world.

In Eos’s batteries, the cathode is not made from the familiar mixture of lithium and other metals. Instead, the primary ingredient is zinc, one of the most widely produced metals in the world. Zinc-based batteries aren’t a new invention—researchers at Exxon patented zinc-bromine flow batteries in the 1970s—but Eos has developed and refined the technology over the past decade.

Zinc-halide batteries have a few potential benefits over lithium-ion options, says Richey, vice president of research and development at Eos. “It’s a fundamentally different way to design a battery, really, from the ground up,” he says.
Eos’s batteries use a water-based electrolyte (the liquid that moves charge around in a battery) instead of an organic solvent, which makes them more stable and means they won’t catch fire, Richey says. The company’s batteries are also designed to have a longer lifetime than lithium-ion cells—about 20 years as opposed to 10 to 15—and don’t require as many safety measures, like active temperature control.

There are some technical challenges that zinc-based and other alternative batteries will need to overcome to make it to the grid, says Rodby, technical principal at Volta Energy Technologies, a venture capital firm focused on energy storage technology. Zinc batteries have a relatively low efficiency—meaning more energy will be lost during charging and discharging than happens in lithium-ion cells. Zinc-halide batteries can also fall victim to unwanted chemical reactions that may shorten their lifetime if they’re not managed.

Those technical challenges are largely addressable, Rodby says. The bigger challenge for Eos and other makers of alternative batteries will be manufacturing at large scale and cutting costs. “That’s what’s challenging here,” she says. “You have by definition a low-cost product and a low-cost market.”

Batteries for grid storage need to get cheap quickly, and one of the major pathways is to make a lot of them. Eos currently operates a semi-automated factory in Pennsylvania with a maximum production of about 540 megawatt-hours annually (if those were lithium-ion batteries, it would be enough to power about 7,000 average US electric vehicles), though the facility doesn’t currently produce at full capacity.

The loan from the DOE is “big news,” says Eos CFO Kroeker. The company has been working on securing the funding for two years, and it will give Eos “much-needed capital” to build out its manufacturing capacity. Funding from the DOE will support up to four additional, fully automated lines in the existing factory.
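The factory-output comparison above, and the round-trip-efficiency point, can be sanity-checked with rough arithmetic. The pack size and efficiency values below are illustrative assumptions for the sketch, not figures from Eos or the DOE:

```python
# Back-of-envelope check of the story's comparison: 540 MWh of annual
# production vs. the number of average US EVs that energy could fill.
annual_output_kwh = 540 * 1_000   # 540 MWh expressed in kWh
ev_pack_kwh = 77                  # assumed average EV battery pack size
evs_equivalent = annual_output_kwh / ev_pack_kwh
print(round(evs_equivalent))      # roughly 7,000, matching the article

# Round-trip efficiency: the share of stored energy recovered on discharge.
# The percentages here are illustrative, not measured values for either chemistry.
stored_kwh = 100
for name, eff in [("zinc-based (assumed)", 0.70), ("lithium-ion (assumed)", 0.90)]:
    print(f"{name}: {stored_kwh * eff:.0f} kWh recovered per {stored_kwh} kWh stored")
```

The gap between the two efficiency lines is what Rodby means by energy “lost during charging and discharging”: for grid storage, that loss is a recurring cost that a cheaper battery has to make up for elsewhere.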
Altogether, the four lines could produce eight gigawatt-hours’ worth of batteries annually by 2026—enough to meet the daily needs of up to 130,000 homes.

The DOE loan is a conditional commitment, and Eos will need to tick a few boxes to receive the funding. That includes reaching technical, commercial, and financial milestones, Kroeker says.

Many alternative battery chemistries have struggled to transition from working samples in the lab and small manufacturing runs to large-scale commercial production. Not only that, but trouble securing funding and problems lining up buyers have taken down several alternative-battery companies in just the past decade.

It can be difficult to bring alternatives to market in energy storage, Kroeker says, though he sees this as the right time for new battery chemistries to make a dent. As renewables rush onto the grid, there’s a much greater need for large-scale energy storage than there was a decade ago. There’s also new policy support in place that makes the business case for new batteries more favorable. “I think we’ve got a once-in-a-generation opportunity now to make a game-changing impact in our energy transition,” he says.
This is today’s edition of our weekday newsletter, which provides a daily dose of what’s going on in the world of technology.

You need to talk to your kid about AI. Here are 6 things you should say.

In the past year, kids, teachers, and parents have had a crash course in artificial intelligence, thanks to the wildly popular AI chatbot ChatGPT. In a knee-jerk reaction, some schools banned the technology—only to cancel the ban months later. Now that many adults have caught up with what ChatGPT is, schools have started exploring ways to use AI systems to teach kids important lessons on critical thinking.

At the start of the new school year, here are MIT Technology Review’s six essential tips for how to get started on giving your kid an AI education.

—Rhiannon Williams & Melissa Heikkilä

My colleague Will Douglas Heaven wrote about how AI can be used in schools for our recent Education issue. You can read that piece too.

Chinese AI chatbots want to be your emotional support

Last week, Baidu became the first Chinese company to roll out its large language model—called Ernie Bot—to the general public, following regulatory approval from the Chinese government. Since then, four more Chinese companies have also made their LLM chatbot products broadly available, while more experienced players, like Alibaba and iFlytek, are still waiting for clearance.

One thing that Zeyi Yang, our China reporter, noticed was how much the Chinese AI bots are geared toward offering emotional support compared with their Western counterparts. Given that chatbots are a novelty right now, it raises questions about how the companies are hoping to keep users engaged once that initial excitement has worn off.

This story originally appeared in China Report, Zeyi’s weekly newsletter giving you the inside track on all things happening in tech in China. Sign up to receive it in your inbox every Tuesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 China’s chips are far more advanced than we realized
Huawei’s latest phone has US officials wondering how effective their sanctions have really been.
+ It suggests China’s domestic chip tech is coming on in leaps and bounds.
+ Japan was once a chipmaking giant. What happened?
+ The US-China chip war is still escalating.

2 Meta’s AI teams are in turmoil
Internal groups are scrapping over the company’s computing resources.
+ Meta’s latest AI model is free for all.

3 Conspiracy theorists have rounded on digital cash
If authorities can’t counter those claims, digital currencies are dead in the water.
+ Is the digital dollar dead?
+ What’s next for China’s digital yuan?

4 Lawyers are the real winners of the crypto crash
Someone has to represent all those bankrupt companies.
+ Sam Bankman-Fried is adjusting to life behind bars.

5 Renting an EV is a minefield
Collecting a hire car that’s only half charged is far from ideal.
+ BYD, China’s biggest EV company, is eyeing an overseas expansion.
+ How new batteries could help your EV charge faster.

6 US immigration used fake social media profiles to spy on targets
Even though aliases are against many platforms’ terms of service.

7 The internet has normalized laughing at death
The creepy groups are a digital symbol of human cruelty.

8 New York is purging thousands of Airbnbs
A new law has made it essentially impossible for the company to operate in the city.
+ And hosts are far from happy about it.

9 Men are already rating AI-generated women’s hotness
In another bleak demonstration of how AI models can perpetuate harmful stereotypes.
+ Ads for AI sex workers are rife across social media.

10 Meet the young activists fighting for kids’ rights online
They’re demanding a say in the rules that affect their lives.

Quote of the day

“It wasn’t totally crazy.
It was only moderately crazy.”

—Ilya Sutskever, co-founder of OpenAI, reflecting on the company’s early desire to chase the theoretical goal of artificial general intelligence.

The big story

Marseille’s battle against the surveillance state

June 2022

Across the world, video cameras have become an accepted feature of urban life. Many cities in China now have dense networks of them, and London and New Delhi aren’t far behind. Now France is playing catch-up.

Concerns have been raised throughout the country. But the surveillance rollout has met special resistance in Marseille, France’s second-biggest city. It’s unsurprising, perhaps, that activists are fighting back against the cameras, highlighting the surveillance system’s overreach and underperformance. But are they succeeding?

—Fleur Macdonald

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line.)

+ How’d ya like dem ? Quite a lot, actually.
+ Why keeping isn’t as crazy as it sounds.
+ There’s no single explanation for why we get .
+ This couldn’t be cuter.
+ Here’s how to get your steak .
This story first appeared in China Report, MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday.

Chinese ChatGPT-like bots are having a moment right now. As I reported last week, Baidu became the first Chinese tech company to roll out its large language model—called Ernie Bot—to the general public, following regulatory approval from the Chinese government. Previously, access required an application or was limited to corporate clients.

I have to admit the Chinese public has reacted more passionately than I had expected. According to Baidu, the Ernie Bot mobile app reached 1 million users in the 19 hours following the announcement, and the model responded to more than 33.42 million user questions in 24 hours, averaging 23,000 questions per minute.

Since then, four more Chinese companies—the facial-recognition giant SenseTime and three young startups, Zhipu AI, Baichuan AI, and MiniMax—have also made their LLM chatbot products broadly available. But some more experienced players, like Alibaba and iFlytek, are still waiting for clearance.

Like many others, I downloaded the Ernie Bot app last week to try it out. I was curious to find out how it differs from predecessors like ChatGPT. What I noticed first was that Ernie Bot does a lot more hand-holding. Unlike ChatGPT’s public app or website, which is essentially just a chat box, Baidu’s app has many more features designed to onboard and engage new users. Under Ernie Bot’s chat box, there’s an endless list of prompt suggestions—like “Come up with a name for a baby” and “Generate a work report.” There’s another tab called “Discovery” that displays over 190 pre-selected topics, including gamified challenges (“Convince the AI boss to raise my salary”) and customized chatting scenarios (“Compliment me”).
It seems to me that a major challenge for Chinese AI companies is that now, with government approval to open up to the public, they actually need to earn users and keep them interested. To many people, chatbots are a novelty right now. But that novelty will eventually wear off, and the apps need to make sure people have other reasons to stay.

One clever thing Baidu has done is to include a tab for user-generated content in the app. In the community forum, I can see the questions other users have asked the app, as well as the text and image responses they got. Some of them are on point and fun, while others are way off base, but I can see how this inspires users to try prompts themselves and work to improve the answers.

Left: a successful generation from the prompt “Pikachu wearing sunglasses and smoking cigars.” Right: Ernie Bot failed to generate an image reflecting the literal or figurative meaning of 狗尾续貂, “To join a dog’s tail to a sable coat,” a Chinese idiom for a disappointing sequel to a fine work.

Another feature that caught my attention was Ernie Bot’s efforts to introduce role-playing. One of the top categories on the “Discovery” page asks the chatbot to respond in the voice of pre-trained personas, including Chinese historical figures like the ancient emperor Qin Shi Huang, living celebrities like Elon Musk, anime characters, and imaginary romantic partners. (I asked the Musk bot who it is; it answered: “I am Elon Musk, a passionate, focused, action-oriented, workaholic, dream-chaser, irritable, arrogant, harsh, stubborn, intelligent, emotionless, highly goal-oriented, highly stress-resistant, and quick-learner person.”)

I have to say the personas do not seem to be very well trained; “Qin Shi Huang” and “Elon Musk” both broke character very quickly when I asked them to comment on serious matters like the state of AI development in China. They just gave me bland, Wikipedia-style answers.
But the most popular persona—already used by over 140,000 people, according to the app—is called “the considerate elder sister.” When I asked “her” what her persona is like, she answered that she’s gentle, mature, and good at listening to others. When I then asked who trained her persona, she responded that she was trained by “a group of professional psychology experts and artificial-intelligence developers” and “based on analysis of a large amount of language and emotional data.”

“I won’t answer a question in a robotic way like ordinary AIs, but I will give you more considerate support by genuinely caring about your life and emotional needs,” she also told me.

I’ve noticed that Chinese AI companies have a particular fondness for emotional-support AI. Xiaoice, one of the first Chinese AI assistants, made its name as an emotional companion for its users. And another startup, Timedomain, left heartbroken users behind when it shut down its AI boyfriend voice service. Baidu seems to be setting up Ernie Bot for the same kind of use.

I’ll be watching this slice of the chatbot space grow with equal parts intrigue and anxiety. To me, it’s one of the most interesting possibilities for AI chatbots. But this is more challenging than writing code or answering math problems; it’s an entirely different task to ask them to provide emotional support, act like humans, and stay in character all the time. And if the companies do pull it off, there will be more risks to consider: What happens when humans actually build deep emotional connections with the AI?

Would you ever want emotional support from an AI chatbot? Tell me your thoughts at zeyi@technologyreview.com.

Catch up with China

1. The mysterious advanced chip in Huawei’s newly released smartphone has sparked many questions and much speculation about China’s progress in chip-making technology.

2. Meta took down the largest Chinese social media influence campaign to date, which included over 7,000 Facebook accounts that bashed the US and other adversaries of China.
Like its predecessors, the campaign failed to attract much attention.

3. Lawmakers across the US are concerned about the idea of China buying American farmland for espionage, but actual land purchase data from 2022 shows that very few deals were made by Chinese entities.

4. A Chinese government official was sentenced to life in prison on charges of corruption, including fabricating a Bitcoin mining company’s electricity consumption data.

5. Terry Gou, the billionaire founder of Foxconn, is running as an independent candidate in Taiwan’s 2024 presidential election.

6. The average Chinese citizen’s life span is now 2.2 years longer thanks to efforts over the past decade to clean up air pollution.

7. Sinopec, the major Chinese oil company, predicts that gasoline demand in China will peak in 2023 because of surging demand for electric vehicles.

8. Chinese sextortion scammers are flooding Twitter comment sections and making the site almost unusable for Chinese speakers.

Lost in translation

The favorite influencer of Chinese grandmas just got banned from social media. “Xiucai,” a 39-year-old man from Maozhou city, posted hundreds of videos on Douyin in which he acts shy in China’s countryside, subtly flirts with the camera, and lip-syncs old songs. While younger generations find these videos cringeworthy, his look and style won him a large following among middle-aged and senior women. He attracted over 12 million followers in just over two years; over 70% of them were female, and nearly half were older than 50. In May, a 72-year-old fan took a 1,000-mile solo train ride to Xiucai’s hometown just so she could meet him in real life.

But last week, his account was suddenly banned from Douyin, which said Xiucai had violated platform rules. Local taxation authorities in Maozhou said he was reported for tax evasion, but the investigation hasn’t concluded yet. His disappearance made more young social media users aware of his cultish popularity.
As those in China’s silver generation learn to use social media and even become addicted to it, they have also become a lucrative target for content creators.

One more thing

Forget about bubble tea. The trendiest drink in China this week is a latte mixed with baijiu, the potent Chinese liquor. The eccentric invention is a collaboration between Luckin Coffee, China’s largest cafe chain, and Kweichow Moutai, China’s most famous liquor brand. News of its release lit up Chinese social media because it sounds like an absolute abomination, but the very absurdity of the idea makes people want to know what it actually tastes like. Dear readers in China, if you’ve tried it, can you let me know what it was like? I need to know, for research reasons.
In the past year, kids, teachers, and parents have had a crash course in artificial intelligence, thanks to the wildly popular AI chatbot ChatGPT. In a knee-jerk reaction, some schools, such as the New York City public schools, banned the technology—only to cancel the ban months later. Now that many adults have caught up with the technology, schools have started exploring ways to use AI systems to teach kids important lessons on critical thinking.

But it’s not just AI chatbots that kids are encountering in schools and in their daily lives. AI is increasingly everywhere—recommending shows to us on Netflix, helping Alexa answer our questions, powering your favorite interactive Snapchat filters and the way you unlock your smartphone.

While some students will invariably be more interested in AI than others, understanding the fundamentals of how these systems work is becoming a basic form of literacy—something everyone who finishes high school should know, says Regina Barzilay, a professor at MIT and a faculty lead for AI at the MIT Jameel Clinic. The clinic recently ran a summer program for 51 high school students interested in the use of AI in health care.

Kids should be encouraged to be curious about the systems that play an increasingly prevalent role in our lives, she says. “Moving forward, it could create humongous disparities if only people who go to university and study data science and computer science understand how it works,” she adds.

At the start of the new school year, here are MIT Technology Review’s six essential tips for how to get started on giving your kid an AI education.

1. Don’t forget: AI is not your friend

Chatbots are built to do exactly that: chat. The friendly, conversational tone ChatGPT adopts when answering questions can make it easy for pupils to forget that they’re interacting with an AI system, not a trusted confidante. That could make people more likely to believe what these chatbots say, instead of treating their suggestions with skepticism.
While chatbots are very good at sounding like a sympathetic human, they’re merely mimicking human speech from data scraped off the internet, says Helen Crompton, a professor at Old Dominion University who specializes in digital innovation in education. Children need reminding to think before they tell a chatbot anything, “because it’s all going into a large database,” she says. Once your data is in the database, it can be very hard to get it back out. It could be used to make technology companies more money without your consent, or it could even be extracted by hackers.

2. AI models are not replacements for search engines

Large language models are only as good as the data they’ve been trained on. That means that while chatbots are adept at confidently answering questions with text that may seem plausible, not all the information they offer up will be accurate. AI language models are also known to present falsehoods as facts. And depending on where that data was collected, they can perpetuate bias and harmful stereotypes.

Students should treat chatbots’ answers as they should any kind of information they encounter on the internet: critically. “These tools are not representative of everybody—what they tell us is based on what they’ve been trained on. Not everybody is on the internet, so they won’t be reflected,” says Victor Lee, an associate professor at Stanford Graduate School of Education who has created AI resources for high school curriculums. “Students should pause and reflect before we click, share, or repost and be more critical of what we’re seeing and believing, because a lot of it could be fake.”

While it may be tempting to rely on chatbots to answer queries, they’re not a replacement for Google or other search engines, says David Smith, a professor of bioscience education at Sheffield Hallam University in the UK, who’s been preparing to help his students navigate the uses of AI in their own learning. Students shouldn’t accept everything large language models say as an undisputed fact, he says, adding: “Whatever answer it gives you, you’re going to have to check it.”

3.
Teachers might accuse you of using an AI when you haven’t

One of the biggest challenges for teachers now that generative AI has reached the masses is working out when students have used AI to write their assignments. While plenty of companies have launched products that promise to detect whether text has been written by a human or a machine, the problem is that these detection tools are unreliable, and it’s easy to evade them. There have been plenty of cases where teachers assumed an essay was generated by AI when it actually wasn’t.

Familiarizing yourself with your child’s school’s AI policies or AI disclosure processes (if any) and reminding your child of the importance of abiding by them is an important step, says Lee. If your child has been wrongly accused of using AI in an assignment, remember to stay calm, says Crompton. Don’t be afraid to challenge the decision and ask how it was made, and feel free to point to the record ChatGPT keeps of an individual user’s conversations if you need to prove your child didn’t lift material directly, she adds.

4. Recommender systems are designed to get you hooked and might show you bad stuff

It’s important to understand and explain to kids how recommendation algorithms work, says Teemu Roos, a computer science professor at the University of Helsinki, who is developing a curriculum on AI for Finnish schools. Tech companies make money when people watch ads on their platforms. That’s why they have developed powerful AI algorithms that recommend content, such as videos on YouTube or TikTok, so that people get hooked and stay on the platform for as long as possible. The algorithms track and closely measure what kinds of videos people watch, and then recommend similar videos. The more cat videos you watch, for example, the more likely the algorithm is to think you will want to see more cat videos. These services have a tendency to guide users to harmful content like misinformation, Roos adds.
This is because people tend to linger on content that is weird or shocking, such as misinformation about health or extreme political ideologies. It’s very easy to get sent down a rabbit hole or stuck in a loop, so it’s a good idea not to believe everything you see online. You should double-check information against other reliable sources too.

5. Remember to use AI safely and responsibly

Generative AI isn’t just limited to text: there are plenty of free apps and web programs that can impose someone’s face onto someone else’s body within seconds. While today’s students are likely to have been warned about the dangers of sharing intimate images online, they should be equally wary of uploading friends’ faces into these apps—particularly because doing so could have legal repercussions. For example, courts have found teens guilty of spreading child pornography for sending explicit material about other teens, or even about themselves.

“We have conversations with kids about responsible online behavior, both for their own safety and also to not harass, or doxx, or catfish anyone else, but we should also remind them of their own responsibilities,” says Lee. “Just as nasty rumors spread, you can imagine what happens when someone starts to circulate a fake image.”

It also helps to provide children and teenagers with specific examples of the privacy or legal risks of using the internet rather than trying to talk to them about sweeping rules or guidelines, Lee points out. For instance, talking them through how AI face-editing apps could retain the pictures they upload, or pointing them to news stories about platforms being hacked, can make a bigger impression than general warnings to “be careful about your privacy,” he says.

6. Don’t miss out on what AI’s actually good at

It’s not all doom and gloom, though. While many early discussions around AI in the classroom revolved around its potential as a cheating aid, when it’s used intelligently, it can be an enormously helpful tool.
Students who find themselves struggling to understand a tricky topic could ask ChatGPT to break it down for them step by step, to rephrase it as a rap, or to take on the persona of an expert biology teacher so they can test their own knowledge. It’s also exceptionally good at quickly drawing up detailed tables to compare the relative pros and cons of certain colleges, for example, which would otherwise take hours to research and compile.

Asking a chatbot for glossaries of difficult words, for practice history questions ahead of a quiz, or for help evaluating answers after writing them are other beneficial uses, Crompton points out. “So long as you remember the bias, the tendency toward hallucination, and the importance of digital literacy—if a student is using it in the right way, that’s great,” she says. “We’re just all figuring it out as we go.”
This is today’s edition of our weekday newsletter, which provides a daily dose of what’s going on in the world of technology.

Coming soon: MIT Technology Review’s 15 Climate Tech Companies to Watch

For decades, MIT Technology Review has published annual lists highlighting the advances redefining what technology can do and the brightest minds pushing their fields forward. This year, we’re launching a new list, recognizing companies making progress on one of society’s most pressing challenges: climate change.

MIT Technology Review’s 15 Climate Tech Companies to Watch will highlight the startups and established businesses that our editors think could have the greatest potential to address the threats of global warming. And attendees of our upcoming ClimateTech conference will be the first to find out.

—James Temple

ClimateTech is taking place at the MIT Media Lab on MIT’s campus in Cambridge, Massachusetts, on October 4-5. You can register for the event, either in person or online.

We know remarkably little about how AI language models work

AI language models are not humans, and yet we evaluate them as if they were, using tests like the bar exam or the United States Medical Licensing Examination. The models tend to do really well in these exams, probably because examples of such exams are abundant in the models’ training data.

Now, a growing number of experts have called for these tests to be ditched, saying they boost AI hype and fuel the illusion that such AI models are more capable than they actually are. These discussions, raised in a story we published last week, highlight just how little we know about how AI language models work and why they generate the things they do—and why our tendency to anthropomorphize them can be problematic.

Melissa’s story first appeared in The Algorithm, her weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.
The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk is suing the Anti-Defamation League
He claims the organization is trying to kill X, blaming it for a 60% drop in advertising revenue.
+ The ADL has tracked a rise in hate speech on X since Musk took over.

2 China is creating a state-backed chip fund
It’s part of the country’s plan to sidestep increasingly harsh sanctions from the US.
+ The outlook for China’s economy isn’t too rosy right now.
+ The US-China chip war is still escalating.

3 India’s lunar mission is officially complete
Its rover and lander have shut down—for now.
+ The rover even managed a small hop before entering sleep mode.
+ What’s next for the moon.

4 ‘Miracle cancer cures’ don’t come cheap
The high costs of personalized medicine mean many of the most vulnerable patients are priced out of life-saving treatment.
+ Two sick children and a $1.5 million bill: One family’s race for a gene therapy cure.

5 Investors are losing faith in Sequoia
The venture capital firm’s major shakeup has raised a lot of questions about its future.

6 Record numbers of Pakistan’s tech workers are leaving the country
Talented engineers are seeking new opportunities, away from home.

7 Video games are becoming gentler
A new wave of gamers want to be soothed, not overstimulated.

8 Spotify’s podcasting empire is crumbling
The majority of its shows aren’t profitable, and competition is fierce.
+ Bad news for white noise podcasts: ad payouts are being stopped.

9 Who is tradwife content for, really?
The young influencers espousing traditional family values are unlikely to do so forever.

10 AI wants to help us talk to the animals
Wildlife is under threat. Trying to communicate with other species could help us protect them.
Quote of the day

“It’s the end of the month versus the end of the world.”

—Nicolas Miailhe, co-founder of the think tank the Future Society, points out the extreme disparity between camps of AI experts who can’t agree over how big a threat AI poses to humanity.

The big story

The quest to learn if our brain’s mutations affect mental health

August 2021

Scientists have struggled in their search for specific genes behind most brain disorders, including autism and Alzheimer’s disease. Unlike problems with some other parts of our body, the vast majority of brain disorder presentations are not linked to an identifiable gene.

But a University of California, San Diego study published in 2001 suggested a different path. What if it wasn’t a single faulty gene—or even a series of genes—that always caused cognitive issues? What if it could be the genetic differences between cells?

The explanation had seemed far-fetched, but more researchers have begun to take it seriously. Scientists already knew that the 85 billion to 100 billion neurons in your brain work to some extent in concert—but what they want to know is whether there is a risk when some of those cells might be singing a different genetic tune.

—Roxanne Khamsi

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line.)

+ Has it really been 15 years since started doing cartwheels mid-performance on the Today show?
+ I’m blessing your Tuesday with these adorable (meerkittens?)
+ Japan’s are weird, wonderful, and everything in between.
+ Instagram’s hottest faces wouldn’t be seen dead without a .
+ A ? Perfection.
AI language models are not humans, and yet we evaluate them as if they were, using tests like the bar exam or the United States Medical Licensing Examination. The models tend to do really well in these exams, probably because examples of such exams are abundant in the models’ training data. Yet, as my colleague Will Douglas Heaven writes in , “some people are dazzled by what they see as glimmers of human-like intelligence; others aren’t convinced one bit.” A growing number of experts have called for these tests to be ditched, saying they boost AI hype and create “the illusion that [AI language models] have greater capabilities than what truly exists.”

What stood out to me in Will’s story is that we know remarkably little about how AI language models work and why they generate the things they do. With these tests, we’re trying to measure and glorify their “intelligence” based on their outputs, without fully understanding how they function under the hood.

Other highlights:

Our tendency to anthropomorphize makes this messy: “People have been giving human intelligence tests—IQ tests and so on—to machines since the very beginning of AI,” says Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico. “The issue throughout has been what it means when you test a machine like this. It doesn’t mean the same thing that it means for a human.”

Kids vs. GPT-3: Researchers at the University of California, Los Angeles gave GPT-3 a story about a magical genie transferring jewels between two bottles and then asked it how to transfer gumballs from one bowl to another, using objects such as a posterboard and a cardboard tube. The idea is that the story hints at ways to solve the problem. GPT-3 proposed elaborate but mechanically nonsensical solutions. “This is the sort of thing that children can easily solve,” says Taylor Webb, one of the researchers.
AI language models are not humans: “With large language models producing text that seems so human-like, it is tempting to assume that human psychology tests will be useful for evaluating them. But that’s not true: human psychology tests rely on many assumptions that may not hold for large language models,” says Laura Weidinger, a senior research scientist at Google DeepMind.

Lessons from the animal kingdom: Lucy Cheke, a psychologist at the University of Cambridge, UK, suggests AI researchers could adapt techniques used to study animals, which have been developed to avoid jumping to conclusions based on human bias.

Nobody knows how language models work: “I think that the fundamental problem is that we keep focusing on test results rather than how you pass the tests,” says Tomer Ullman, a cognitive scientist at Harvard University.

Deeper Learning

Google DeepMind has launched a watermarking tool for AI-generated images

Google DeepMind has launched a new watermarking tool that labels whether images have been generated with AI. The tool, called SynthID, will initially be available only to users of Google’s AI image generator Imagen. Users will be able to generate images and then choose whether to add a watermark or not. The hope is that it could help people tell when AI-generated content is being passed off as real, or protect copyright.

Baby steps: Google DeepMind is now the first Big Tech company to publicly launch such a tool, following a voluntary pledge with the White House to develop responsible AI. Watermarking—a technique where you hide a signal in a piece of text or an image to identify it as AI-generated—has become one of the most popular ideas proposed to curb such harms. It’s a good start, but watermarks alone won’t create more trust online.
Bits and Bytes

Chinese ChatGPT alternatives just got approved for the general public
Baidu, one of China’s leading artificial-intelligence companies, has announced it will open up access to its ChatGPT-like large language model, Ernie Bot, to the general public. Our reporter Zeyi Yang looks at what this means for Chinese internet users. ()

Brain implants helped create a digital avatar of a stroke survivor’s face
Incredible news. Two papers in Nature show major advancements in the effort to translate brain activity into speech. Researchers managed to help women who had lost their ability to speak communicate again with the help of a brain implant, AI algorithms, and digital avatars. ()

Inside the AI porn marketplace where everything and everyone is for sale
This was an excellent investigation looking at how the generative AI boom has created a seedy marketplace for deepfake porn. It’s completely predictable and frustrating how little we have done to prevent real-life harms like nonconsensual deepfake pornography. ()

An army of overseas workers in “digital sweatshops” powers the AI boom
Millions of people in the Philippines work as data annotators for the data company Scale AI. But as this investigation into the questionable labor conditions shows, many workers are earning below the minimum wage and have had payments delayed, reduced, or canceled. ()

The tropical island with the hot domain name
Lol. The AI boom has meant Anguilla has hit the jackpot with its .ai domain name. The country is expected to make millions this year from companies wanting the buzzy domain name. ()

P.S. We’re hiring! MIT Technology Review is looking for an ambitious AI reporter to join our team with an emphasis on the intersection of hardware and AI. This position is based in Cambridge, Massachusetts. Sounds like you, or someone you know? .
For decades, MIT Technology Review has published annual lists highlighting the advances redefining industries and the innovators pushing their fields forward. This year, we’re launching a new list, recognizing companies making progress on one of society’s most pressing challenges: climate change. MIT Technology Review’s 15 Climate Tech Companies to Watch will highlight startups and established businesses that our editors think could have the greatest potential to substantially reduce greenhouse-gas emissions or otherwise address the threats of global warming.

Attendees of the upcoming will be the first to learn the names of the selected companies, and founders or executives from several will appear on stage at the event. The conference will be held at the MIT Media Lab on MIT’s campus in Cambridge, Massachusetts, on October 4-5.

MIT Technology Review’s climate team consulted dozens of industry experts, academic sources, and investors to come up with a long list of nominees, representing a broad array of climate technologies. From there, the editors worked to narrow down the list to 15 companies whose technical advances and track records in implementing solutions give them a real shot at reducing emissions or easing the harms climate change could cause.

We do not profess to be soothsayers. Businesses fail for all sorts of reasons, and some of these may, too. But all of them are pursuing paths worth exploring as the world races to develop cleaner, better ways of generating energy, producing food, and moving things and people around the globe. We’re confident we’ve picked a list of companies that could really help to combat the rising dangers before us.

We’re excited to share the winners on October 4. And we hope you can be among the first to hear our selections and share your feedback.
This is today’s edition of , our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How one elite university is approaching ChatGPT this school year

For many people, the start of September marks the real beginning of the year. Back-to-school season always feels like a reset moment. However, the big topic this time around seems to be the same thing that defined the end of last year: ChatGPT and other large language models.

Last winter and spring brought so many headlines about AI in the classroom, with some panicked schools going as far as to ban ChatGPT altogether. Now, with the summer months having offered a bit of time for reflection, some schools seem to be reconsidering their approach. Tate Ryan-Mosley, our senior tech policy reporter, spoke to the associate provost at Yale University to find out why the prestigious school never considered banning ChatGPT—and instead wants to work with it.

Tate’s story is from The Technocrat, her weekly newsletter covering tech policy and power. to receive it in your inbox every Friday.

If you’re interested in reading more about AI’s effect on education, why not check out:

+ ChatGPT is going to change education, not destroy it. The narrative around cheating students doesn’t tell the whole story. Meet the teachers who think generative AI could actually make learning better.

+ Read why a high school senior believes that for the better.

+ How AI is helping historians better understand our past. The historians of tomorrow are using computer science to analyze how people lived centuries ago.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The seemingly unstoppable rise of China’s EV makers
The country’s internet giants are being eclipsed by its ambitious car companies. ( $)
+ Working out how to recycle those sizable batteries is still a struggle.
( $)
+ The US secretary of commerce’s trip to China is seriously high stakes. ( $)
+ China’s car companies are turning into tech companies. ()

2 US intelligence is developing surveillance-equipped clothing
Smart textiles, including underwear, could capture vast swathes of data for officials. ()
+ Home Office officials in the UK lobbied in favor of facial recognition. ()

3 India’s intense tech training schools are breeding toxic cultures
But the scandal-stricken schools are still seen as the best path to a high-flying career. ( $)

4 Organizations are struggling to fight an influx of cybercrime
There just aren’t enough skilled cybersecurity workers to defend against hackers. ( $)

5 Kiwi Farms just won’t die
Despite transgender activists’ efforts to keep its hateful campaigns offline. ( $)

6 The tricky ethics of CRISPR
Just because we can edit genes doesn’t mean we should. ( $)
+ The creator of the CRISPR babies was released from a Chinese prison last year. ()

7 This startup is training ultra-Orthodox Jews for high-tech careers
Haredi men are learning how to use computers and programming languages for the first time. ()

8 Silicon Valley’s latest obsession? Testosterone
Founders are fixated on the hormone’s global decline—and worrying about their own levels. ( $)

9 We’re bidding a fond farewell to Netflix’s DVDs
While demand for the physical discs has dwindled, die-hard devotees are devastated. ( $)

10 Concerts are different now
You can thank TikTok for all those outlandish outfits. ()
+ A Montana official is hell-bent on banning TikTok. ( $)
+ TikTok’s hyper-realistic beauty filters are here to stay. ()

Quote of the day

“I think they’re a generation ahead of us.”

—Renault CEO Luca de Meo reflects on China’s electric vehicle makers’ stranglehold on the industry, reports.

The big story

What to expect when you’re expecting an extra X or Y chromosome

August 2022

Sex chromosome variations, in which people have a surplus or missing X or Y, occur in as many as one in 400 births.
Yet the majority of people affected don’t even know they have them, because these conditions can fly under the radar. As more expectant parents opt for noninvasive prenatal testing in hopes of ruling out serious conditions, many of them are surprised to discover instead that their fetus has a far less severe—but far less well-known—condition. And because so many sex chromosome variations have historically gone undiagnosed, many ob-gyns are not familiar with these conditions, leaving families to navigate the unexpected news on their own.

—Bonnie Rochman

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or .)

+ Some of the creative ideas registered by inventors in the UK last year are truly off the wall—a path that , anyone?
+ Congratulations to Tami Manis, who has the honor of owning the for a woman.
+ Grilling is a science as well as an art.
+ This steel drum cover of 50 Cent’s is everything I hoped for.
+ Take a sneaky peek at this year’s mesmerizing shortlist.
This is today’s edition of , our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A biotech company says it put dopamine-making cells into people’s brains

The news: In an important test for stem-cell medicine, biotech company BlueRock Therapeutics says implants of lab-made neurons introduced into the brains of 12 people with Parkinson’s disease appear to be safe and may have reduced symptoms for some of them.

How it works: The new cells produce the neurotransmitter dopamine, a shortage of which is what produces the devastating symptoms of Parkinson’s, including problems moving. The replacement neurons were manufactured using powerful stem cells originally sourced from a human embryo created using an in vitro fertilization procedure.

Why it matters: The small-scale trial is one of the largest and most costly tests yet of embryonic-stem-cell technology, the controversial and much-hyped approach of using stem cells taken from IVF embryos to produce replacement tissue and body parts.

—Antonio Regalado

Here’s why I am coining the term “embryo tech”

Antonio, our senior biomedicine editor, has been following experiments using embryonic stem cells for quite some time. He has coined the term “embryo tech” for the powerful technology researchers can extract by studying them, which includes new ways of reproducing through IVF—and could even hold clues to real rejuvenation science.

To read more about embryo tech’s exciting potential, check out the latest edition of , our weekly biotech newsletter. to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US government has earmarked $12 billion to speed up the transition to EVs
It’ll incentivize existing automakers to refurbish their factories into EV production lines. ()
+ Driving an EV is a real learning curve.
( $)
+ Why getting more EVs on the road is all about charging. ()

2 We still don’t know how effective geoengineering the climate could be
And scientists are divided over whether it’s wasteful at best, dangerous at worst. ( $)
+ A startup released particles into the atmosphere, in an effort to tweak the climate. ()

3 Covid is on the rise again
The number of cases is creeping up around the world—but try not to panic. ( $)
+ Covid hasn’t entirely gone away—here’s where we stand. ()

4 Apple is dropping its iCloud photo-scanning tool
The controversial mechanism would create new opportunities for data thieves, the company has concluded. ( $)

5 India is launching a probe to study the sun
Buoyed by the success of its recent lunar landing, India is set to launch the spacecraft on Saturday. ( $)
+ The lunar Chandrayaan-3 probe is capturing impressive new pictures. ()
+ Scientists have solved a light-dimming space mystery. ()

6 Generative AI is unlikely to interfere in major elections
While it has disruptive potential, panicking about it is unwarranted. ( $)
+ Americans are worrying that AI could make their lives worse, however. ( $)
+ Six ways that AI could change politics. ()

7 We’ve never seen a year for hurricanes quite like this
The combination of an El Niño year and extreme heat creates the perfect storm. ( $)
+ Here’s what we know about hurricanes and climate change. ()

8 An AI-powered drone beat champion human pilots
It’s the first time an AI system has outperformed human pilots in a physical sport. ()
+ New York police will use drones to surveil Labor Day parties. ()

9 LinkedIn’s users are opening up
As other social media platforms falter, they’ve started oversharing on the professional network. ( $)

10 Brazil’s delivery workers are fighting back against rude customers
By threatening to eat their food if they don’t comply.
()

Quote of the day

“It could be a cliff we end up falling off, or a mountain we climb to discover a beautiful view.”

—George Bamford, founder of luxury watch customizing company Bamford Watch Department, describes how he’s been dabbling with AI to visualize new timepieces.

The big story

El Paso was “drought-proof.” Climate change is pushing its limits.

December 2021

El Paso has long been a model for water conservation. It’s done all the right things—it’s launched programs to persuade residents to use less water and deployed technological systems, including desalination and wastewater recycling, to add to its water resources. A former president of the water utility once famously declared El Paso “drought-proof.”

Now, though, even El Paso’s careful plans are being challenged by intense droughts. As the pressure ratchets up, El Paso, and places like it, force us to ask just how far adaptation can go.

—Casey Crownhart

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or .)

+ Wait a minute, that’s !
+ A YouTube channel delving into is what the world needs right now.
+ It’s time to plan an .
+ David Tennant doing a dramatic reading of is a thing of beauty.
+ How to make your own at home—it’s super quick.
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, . This week, I published a story about the results of a study on Parkinson’s disease in which a biotech company transplanted dopamine-making neurons into people’s brains. (You can read the full story .) The reason I am following this experiment, and others like it, is that they are long-awaited tests of transplant tissue made from embryonic stem cells. Those are the sometimes controversial cells first plucked from human embryos left over from in vitro fertilization procedures 25 years ago. Their medical promise is they can turn into any other kind of cell. In some ways, stem cells are a huge disappointment. Despite their potential, scientists still haven’t crafted any approved medical treatment from them after all this time. The Parkinson’s study, run by the biotech company BlueRock, a division of Bayer, just passed phase 1, the earliest stage of safety testing. The researchers still don’t know whether the transplant works. I’m not sure how much money has been plowed into embryonic stem cells so far, but it’s definitely in the billions. And in many cases, the original proof of principle that cell transplants might work is actually decades old—like experiments from the 1990s showing that pancreas cells from cadavers, if transplanted, could treat diabetes. Cells derived from human cadavers, and sometimes from abortion tissue, make for an uneven product that’s hard to obtain. Today’s stem-cell companies aim instead to manufacture cells to precise specifications, increasing the chance they’ll succeed as real products. That actually isn’t so easy—and it’s a big part of the reason for the delay. “I can tell you why there’s nothing: it’s a manufacturing issue,” says Mark Kotter. 
He’s the founder of a startup company, Bit Bio, that is among those developing new ways to make stem cells do researchers’ bidding.

While there aren’t any treatments built from embryonic stem cells yet, when I look around biology labs, these cells are everywhere. This summer, when I visited the busy cell culture room at the Whitehead Institute, on MIT’s campus, a postdoc named Julia Joung pulled out a plate of them and let me see their silvery outlines through a microscope. Joung, a promising young scientist, is also working on new ways to control embryonic stem cells. Incredibly, the cells I was looking at were descendants of the earliest supplies, dating back to 1998. One curious property of embryonic stem cells is that they are immortal; they keep dividing forever. “These are the originals,” Joung said.

That reproducibility is part of why stem cells are technology, not just a science project. And what a cool technology it is. The internet has all the world’s information. A one-cell embryo has the information to make the whole human body. It’s what I have started to think of as “embryo tech.” I don’t mean what we do to embryos (like gene testing or even gene editing) but, instead, the powerful technology researchers can extract by studying them. Embryo tech includes stem cells and new ways of reproducing through IVF. It could even hold clues to real rejuvenation science.

For instance, one lab in San Diego is using stem cells to grow brain organoids, a bundle of fetal-stage brain cells living in a petri dish. Scientists there plan to attach the organoid to a robot and learn to guide it through a maze. It sounds wild, but some researchers imagine that cell phones of the future could have biological components, even bits of brain, in them.

Another recent example of embryo tech is in longevity science. Researchers now know how to turn any cell into a stem cell, by exposing it to what are called transcription factors.
It means they don’t need embryos (with their ethical drawbacks) as the starting point. One hot idea in biotech is to give people controlled doses of these factors in order to actually rejuvenate body parts. Until recently, scientific dogma said human lives could only run in one direction: forward. But now the idea is to turn back the clock—by pushing your cells just a little way back in the direction of the embryo you once were. One company working on the idea is Turn Bio, which thinks it can inject the factors into people’s skin to get rid of wrinkles. Another company, , has raised $3 billion to pursue the deep scientific questions around this phenomenon. Finally, another cool discovery is that given the right cues, stem cells will try to self-organize into shapes that look like embryos. These entities, called synthetic embryos, or embryo models, are going to be useful in research, including studies aimed at developing new contraceptives. They are also a dazzling demonstration that any cell, even a bit of skin, may have the intrinsic capacity to create an entirely new person. All these, to my mind, are examples of embryo tech. But by its nature, this type of technology can shock our sensibilities. It’s the old story: reproduction is something secret, even divine. And toying with the spark of life in the lab—well, that’s playing at Frankenstein, isn’t it? When reporting about the Parkinson’s treatment, I learned that Bayer is still anxious about embryo tech. Those at the company have been tripping over themselves to avoid saying “embryo” at all. That’s because Germany has a very strict law that forbids destruction of embryos for research within its borders. So what will embryo tech lead to next? I’m going to be tracking the progress of human embryonic stem cells, and I am working on a few big stories from the frontiers that I hope will shock, awe, and inspire. So stay tuned to MIT Technology Review. 
Read more from MIT Technology Review’s archive

Earlier this month, we published . While there are no treatments yet, the number of experiments on patients is growing. That has some researchers predicting that the technology could deliver soon. It’s about time! And check out the of our magazine, where we our on the topic, from way back in 1998.

Stem cells come from embryos, but surprisingly, the reverse also seems to be the case: given a few nudges, these potent cells will spontaneously form structures that look, and act, a lot like real embryos. I first reported on the appearance of “” in 2017 and the topic has only heated up since, as we recounted this June in about the wild race to improve the technology.

Stem cells aren’t the only approach to regrowing organs. In fact, some of our body parts have the ability to regenerate on their own. Jessica Hamzelou reported on a biotech company that’s trying to make inside people’s lymph nodes.

From around the web

The overdose reversal drug Narcan is going over-the-counter. A two-pack of the nasal spray will cost $49.99 and should be at US pharmacies next week. The move comes as overdoses from the opioid fentanyl spiral out of control. ()

If you’re having surgery, you’ll probably be looking for the best surgeon you can get. That might be a woman, according to a study finding that patients of female surgeons are a lot less likely to die in the months following an operation than those operated on by men. The reasons for the effect are unknown. ()

New weight-loss drugs don’t just cause people to shed pounds. One of them, Wegovy, could also protect against heart failure.

I stopped worrying about covid-19 after my second vaccine shot and never looked back. But a new variant has some people asking, “How bad could BA.2.86 get?” ()
In an important test for stem-cell medicine, a biotech company says implants of lab-made neurons introduced into the brains of 12 people with Parkinson’s disease appear to be safe and may have reduced symptoms for some of them.

The added cells should produce the neurotransmitter dopamine, a shortage of which is what produces the devastating symptoms of Parkinson’s, including problems moving. “The goal is that they form synapses and talk to other cells as if they were from the same person,” says Claire Henchcliffe, a neurologist at the University of California, Irvine, who is one of the leaders of the study. “What’s so interesting is that you can deliver these cells and they can start talking to the host.”

The study is one of the largest and most costly tests yet of embryonic-stem-cell technology, the controversial and much-hyped approach of using stem cells taken from IVF embryos to produce replacement tissue and body parts. The small-scale trial, whose main aim was to demonstrate the safety of the approach, was sponsored by BlueRock Therapeutics, a subsidiary of the drug giant Bayer. The replacement neurons were manufactured using powerful stem cells originally sourced from a human embryo created using an in vitro fertilization procedure.

According to data presented by Henchcliffe and others on August 28 at the International Congress for Parkinson’s Disease and Movement Disorders in Copenhagen, there are also hints that the added cells had survived and were reducing patients’ symptoms a year after the treatment. These clues that the transplants helped came from brain scans that showed an increase in dopamine cells in the patients’ brains as well as a decrease in “off time,” or the number of hours per day the volunteers felt they were incapacitated by their symptoms.

However, outside experts expressed caution in interpreting the findings, saying they seemed to show inconsistent effects—some of which might be due to the placebo effect, not the treatment.
“It is encouraging that the trial has not led to any safety concerns and that there may be some benefits,” says Roger Barker, who studies Parkinson’s disease at the University of Cambridge. But Barker called the evidence that the transplanted cells had survived “a bit disappointing.” Because researchers can’t see the cells directly once they are in a person’s head, they instead track their presence by giving people a radioactive precursor to dopamine and then watching its uptake in their brains in a PET scanner. To Barker, these results were not so strong, and he says it’s “still a bit too early to know” whether the transplanted cells took hold and repaired the patients’ brains.

Legal questions

Embryonic stem cells were first isolated in 1998 at the University of Wisconsin from embryos made in fertility clinics. They are useful to scientists because they can be grown in the lab and, in theory, be coaxed to form any of the 200 or so cell types in the human body, prompting attempts to restore vision, cure diabetes, and reverse spinal cord injury. However, there is still no medical treatment based on embryonic stem cells, despite billions of dollars’ worth of research by governments and companies over two and a half decades. BlueRock’s study remains one of the key attempts to change that.

And stem cells continue to raise delicate issues in Germany, where Bayer is headquartered. Under Germany’s Embryo Protection Act, one of the most restrictive such laws in the world, it’s still a crime, punishable with a prison sentence, to derive embryonic cells from an embryo. What is legal, in certain circumstances, is to use existing cell supplies from abroad, so long as they were created before 2007. Seth Ettenberg, the president and CEO of BlueRock, says the company is manufacturing neurons in the US and that to do so it employs embryonic stem cells from the original supplies in Wisconsin, which remain widely used.
“All the operations of BlueRock respect the high ethical and legal standards of the German Embryo Protection Act, given that BlueRock is not conducting any activities with human embryos,” Nuria Aiguabella Font, a Bayer spokesperson, said in an email.

Long history

The idea of replacing dopamine-making cells to treat Parkinson’s dates to the 1980s, when doctors tried it with fetal neurons collected after abortions. Those studies proved equivocal. While some patients may have benefited, the experiments generated alarming headlines after others developed side effects, like uncontrolled writhing and jerking.

Using brain cells from fetuses wasn’t just ethically dubious to some. Researchers also became convinced such tissue was so variable and hard to obtain that it couldn’t become a standardized treatment. “There is a history of attempts to transplant cells or tissue fragments into brains,” says Henchcliffe. “None ever came to fruition, and I think in the past there was a lack of understanding of the mechanism of action, and a lack of sufficient cells of controlled quality.”

Yet there was evidence transplanted cells could live. Post-mortem examinations of some patients who’d been treated with fetal cells showed that the transplants were still present many years later. “There are a whole bunch of people involved in those fetal-cell transplants. They always wanted to find out—if you did it right, would it work?” says Jeanne Loring, a cofounder of Aspen Neuroscience, a stem-cell company planning to launch its own tests for Parkinson’s disease.

The discovery of embryonic stem cells is what made a more controlled test a possibility. These cells can be multiplied and turned into dopamine-making cells by the billions. The initial work to manufacture such dopamine cells, as well as tests on animals, was performed by Lorenz Studer at Memorial Sloan Kettering Cancer Center in New York.
In 2016 he became a scientific founder of BlueRock, which was initially formed as a joint venture between Bayer and the investment company Versant Ventures. “It’s one of the first times in the field when we have had such a well-understood and uniform product to work with,” says Henchcliffe, who was involved in the early efforts. In 2019, Bayer bought out Versant’s stake in BlueRock, in a deal valuing the stem-cell company at around $1 billion.

Movement disorder

In Parkinson’s disease, the cells that make dopamine die off, leading to shortages of the brain chemical. That can cause tremors, rigid limbs, and a general decrease in movement called bradykinesia. The disease is typically slow-moving, and a drug called levodopa can control the symptoms for years. A type of brain implant called a deep brain stimulator can also reduce symptoms. The disease is progressive, however, and eventually, levodopa can’t control the symptoms as well.
This year, the actor Michael J. Fox confided to CNN that he retired from acting for good after he couldn’t remember his lines anymore, although that was 30 years after his diagnosis. “I’m not gonna lie. It’s getting harder,” Fox told the network. “Every day it’s tougher.” The promise of a cell therapy is that doctors wouldn’t just patch over symptoms but could actually replace the broken brain networks by adding new neurons. “The potential for regenerative medicine is not to just delay disease, but to rebuild brain functionality,” says Ettenberg, BlueRock’s CEO. “There is a day when we hope that people don’t think of themselves as Parkinson’s patients.” Ettenberg says BlueRock plans to launch a larger study next year, with more patients, in order to determine whether the treatment is working, and how well.
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here. When I was growing up near the US Gulf Coast, it was more common for my school to get called off for a hurricane than for a snowstorm. So even though I live in the Northeast now, by the time late August rolls around I’m constantly on hurricane watch. And while the season has been relatively quiet so far, a storm named Idalia changed that, making landfall in Florida as a Category 3 hurricane. (Also, let’s not forget Hurricane Hilary, which in a rare turn of events hit Southern California.) Tracking these storms as they’ve approached the US, I decided to dig into the link between climate change and hurricanes. It’s fuzzier than you might think. But as I was reporting, I also learned that there are a ton of other factors affecting how much damage hurricanes do. So let’s dive into the good, the bad, and the complicated of hurricanes.
The good
The good news is that we’ve gotten a lot better at forecasting hurricanes and warning people about them, says Kerry Emanuel, a hurricane expert and professor emeritus at MIT. I wrote about this a couple of years ago, when new forecasting models were being adopted by the National Weather Service in the US. In the US, average errors in predicting hurricane paths dropped from about 100 miles in 2005 to 65 miles in 2020. Predicting the intensity of storms can be tougher, but two new supercomputers, which the agency received in 2021, could help those forecasts continue to improve too. Supercomputers aren’t the only tool forecasters are using to improve their models, though—some researchers are hoping that AI could speed up weather forecasting. Forecasting needs to be paired with effective communication to get people out of harm’s way by the time a storm hits—and many countries are improving their disaster communication methods.
Bangladesh is one of the world’s most disaster-prone countries, but the death toll from extreme weather has dropped quickly thanks to the country’s disaster-preparedness efforts.
The bad
The bad news is that there are more people and more stuff in the storms’ way than there used to be, because people are flocking to the coast, says a hurricane researcher and forecaster at Colorado State University. The population along Florida’s coastline has grown rapidly, outpacing the growth nationally by a significant margin. That trend holds nationally: population growth in coastal counties in the US is happening at a quicker clip than in other parts of the country. Several insurance companies have already stopped doing business in Florida because of increasing risks, and this year’s hurricane season could add to those pressures. And the expected damage from disasters affects different groups in different ways. Across the US, white people and those with more wealth are more likely to get federal aid after disasters than others, according to one analysis.
The complicated
Climate change is loading the dice on most extreme weather phenomena. But what specific links can we make to hurricanes? A few effects are pretty well documented both in historical data and in climate models. One of the clearest impacts of climate change is rising temperatures. Warmer water can transfer more energy into hurricanes, so as global ocean temperatures hit new heights, hurricanes are more likely to become major storms. Warmer air can hold more moisture (think about how humid the air can feel on a hot day, compared with a cool one). Warmer, wetter air means more rainfall during hurricanes—and flooding is one of the deadliest aspects of the storms. And rising sea levels are making storm surges more severe and coastal flooding more common and dangerous. But there are other effects that aren’t as clear, and questions that are totally open. Most striking to me is that researchers are in total disagreement about how climate change will affect the number of storms that form each year.
For more on what we know (and what we don’t know) about climate change and hurricanes, check out my latest story. Stay safe out there!
Related reading
Forecasting is a difficult task, but supercomputers and AI are both helping scientists better predict weather of all types—check out our coverage from earlier this summer. Flooding is the deadliest part of hurricanes, and cities aren’t prepared to handle it. New York City put in a lot of coastal flood defenses after Hurricane Sandy in 2012. Then Hurricane Ida hit, as I covered after the storm. Millions lost power after Hurricane Ida; my colleague James Temple covered the aftermath.
Keeping up with climate
This has been a summer of extreme weather, from heat waves to wildfires to flooding. Here are 10 data visualizations to sum up a brutal season. A new battery manufacturing facility from Form Energy is being built on the site of an old steel mill in West Virginia. The factory could help revitalize the region’s flagging economy. EV charging in the US is getting complicated. Here’s a great explainer that untangles all the different plugs and cables you need to know about. → Things are changing because many automakers are switching over to Tesla’s charging standard. The first offshore wind auction in the Gulf of Mexico fell pretty flat, with two of three sites getting no bids at all. The lackluster results reveal the challenges facing offshore wind, especially in Texas. A Chinese oil giant is predicting that gasoline demand in the country will peak this year, earlier than previously expected. Electric vehicles are behind diminishing demand for gas. → The “inevitable EV” was one of our picks for the 10 Breakthrough Technologies of 2023. Vermont’s leading subsidy program for small battery installations is getting bigger.
On Wednesday, Baidu, one of China’s leading artificial-intelligence companies, announced it would open up access to its ChatGPT-like large language model, Ernie Bot, to the general public. It’s been a long time coming. Released in March, Ernie Bot was the first Chinese ChatGPT rival. Since then, many Chinese tech companies, including Alibaba and ByteDance, have followed suit and released their own models. Yet all of them forced users to sit on waitlists or go through approval systems, making the products mostly inaccessible for ordinary users—a possible result, people suspected, of limits put in place by the Chinese state. On August 30, Baidu posted on social media that it would also release a batch of new AI applications within Ernie Bot as the company rolled out open registration the following day. One report, quoting an anonymous source, said that regulatory approval would be given to “a handful of firms including fledgling players and major technology names.” A Chinese publication reported that eight Chinese generative AI chatbots have been included in the first batch of services approved for public release. ByteDance, which released the chatbot Doubao on August 18, and the Institute of Automation at the Chinese Academy of Sciences, which released Zidong Taichu 2.0 in June, are reportedly also included in the first batch. Other models from Alibaba, iFLYTEK, JD, and 360 are not. When Ernie Bot was released on March 16, the response was a mix of excitement and disappointment. Many people deemed its performance mediocre relative to the previously released ChatGPT. But most people simply weren’t able to see it for themselves. The launch event didn’t feature a live demonstration, and later, to actually try out the bot, Chinese users needed to have a Baidu account and apply for a use license that could take as long as three months to come through.
Because of this, some people who got access early were selling secondhand Baidu accounts on e-commerce sites, charging anywhere from a few bucks to over $100. More than a dozen Chinese generative AI chatbots were released after Ernie Bot. They are all pretty similar to their Western counterparts in that they are capable of conversing in text—answering questions, solving math problems, writing programming code, and composing poems. Some of them also allow input and output in other forms, like audio, images, data visualization, or radio signals. Like Ernie Bot, these services came with restrictions for user access, making it difficult for the general public in China to experience them. Some were allowed only for business uses. One of the main reasons Chinese tech companies limited access to the general public was concern that the models could be used to generate politically sensitive information. While the Chinese government has shown it’s extremely capable of censoring social media content, new technologies like generative AI could push the censorship machine to unknown and unpredictable levels. Most current chatbots, like those from Baidu and ByteDance, refuse to answer sensitive questions about Taiwan or Chinese president Xi Jinping, but a general release to China’s 1.4 billion people would almost certainly allow users to find more clever ways to circumvent censors. When China released its first regulation specifically targeting generative AI services in July, it included a line requesting that companies obtain “relevant administrative licenses,” though at the time the law didn’t specify what licenses it meant. The approval Baidu obtained this week was issued by the Cyberspace Administration of China, the country’s main internet regulator, and it will allow companies to roll out their ChatGPT-style services to the whole country.
But the agency has not officially announced which companies obtained the public access license or which ones have applied for it. Even with the new access, it’s unclear how many people will use the products. The initial lack of access to Chinese chatbot alternatives decreased public interest in them. While ChatGPT has not been officially released in China, many Chinese people are able to access the OpenAI chatbot by using VPN software. “Making Ernie Bot available to hundreds of millions of Internet users, Baidu will collect massive valuable real-world human feedback. This will not only help improve Baidu’s foundation model but also iterate Ernie Bot on a much faster pace, ultimately leading to a superior user experience,” said Robin Li, Baidu’s CEO, according to a press release from the company. Baidu declined to give further comment. ByteDance did not immediately respond to a request for comment from MIT Technology Review.
MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. It’s now possible to link climate change to all kinds of extreme weather, from droughts to flooding to wildfires. Hurricanes are no exception—scientists have found that warming temperatures are causing stronger and less predictable storms. That’s a worry, because hurricanes are already among the most deadly and destructive extreme weather events around the world. In the US alone, hurricanes caused over $100 billion in damages in 2022. In a warming world, we can expect the totals to rise. But the relationship between climate change and hurricanes is more complicated than most people realize. Here’s what we know, and—as Hurricane Idalia batters the Florida coast—what to expect from the storms to come.
Are hurricanes getting more common?
It might seem that there are far more storms than in the past, but we don’t really know for sure. That’s because historical records are limited, with little reliable data more than a few decades old, says Kerry Emanuel, professor emeritus in atmospheric science at MIT. So it’s tough to draw conclusions about how the frequency of tropical cyclones (the umbrella term for storms that are called hurricanes, cyclones, or typhoons, depending on the region) is changing over time. The best data comes from the North Atlantic region, Emanuel says, and it does appear that there are more hurricanes there than there used to be. Globally, though, research suggests that the total number of tropical cyclones has gone down over the past few decades. Scientists disagree on whether cyclogenesis, or storm formation, has changed over time and whether it might be affected by climate change in the future. Some climate models suggest that climate change will increase the total number of storms that form, while others suggest the opposite, says Karthik Balaguru, a climate and data scientist at the Pacific Northwest National Laboratory.
Are hurricanes getting stronger?
Globally, hurricanes have gotten stronger on average in the last four decades, and Emanuel says that to judge from what we know about climate change, the trend is likely to continue. In one study, researchers examined satellite images from between 1979 and 2017 and found that an increasing fraction of storms reached the status of a major hurricane, defined as one with winds of at least 111 miles per hour. This trend of stronger storms fits with theoretical work from Emanuel and other climate scientists, who predicted that warming oceans would cause stronger hurricanes. Warming water provides more energy to storms, resulting in increased wind speeds. As temperatures rise, “you’re going to load the dice toward these higher-end events,” says an atmospheric scientist and hurricane forecasting expert at Colorado State University. That fits with recent research finding that hurricanes in the North Atlantic are intensifying faster, meaning they gain more wind speed as they move across the ocean. The trend is most clear in the North Atlantic, but it also might be applicable around the world—one study found a global increase in the number of storms that undergo a very rapid intensification, with wind speeds increasing by 65 miles per hour or more within 24 hours. Storms that get stronger quickly, especially close to shore, can be particularly dangerous, since people don’t have much time to prepare or evacuate.
How else does climate change affect hurricanes?
There are “compounding effects” from climate change that could influence hurricanes in the future, Balaguru says. Climate change is causing sea levels to rise, making storm surges more severe and coastal flooding more likely and damaging. In addition, as air gets warmer it can hold more water, meaning there will be more rain from storms as climate change pushes global temperatures higher. That could all add up to more flooding during hurricanes. There are other, less-understood ways that climate change might affect storms in the future.
Storms are becoming slower-moving, dumping more rain on concentrated areas, as Hurricane Harvey did in Houston in 2017. Some studies link this effect to climate change too, though the connection is not as certain as others, Balaguru says. Regional changes in atmospheric circulation could also affect where storms form and travel. Even as hurricanes are getting stronger and more volatile, our ability to forecast both their path and their intensity has improved in recent years. Advances in forecasting models and computing power could help officials better predict storms and give people more time to prepare. But these gains will only get us so far. “Unfortunately, we’re getting better warnings, but we can’t get indefinitely better,” Emanuel says.
When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.” Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free. Last month Webb and his colleagues published an article in Nature, in which they describe a set of tests they devised to assess the use of analogy to solve problems (known as analogical reasoning). On some of those tests GPT-3 scored better than a group of undergrads. “Analogy is central to human reasoning,” says Webb. “We think of it as being one of the major things that any kind of machine intelligence would need to demonstrate.” What Webb’s research highlights is only the latest in a long string of remarkable tricks pulled off by large language models. For example, when OpenAI unveiled GPT-3’s successor, GPT-4, in March, the company published an eye-popping list of professional and academic assessments that it claimed its new large language model had aced, including a couple of dozen high school tests and the bar exam. OpenAI later worked with Microsoft to show that GPT-4 could pass parts of the United States Medical Licensing Examination. And multiple researchers claim to have shown that large language models can pass tests designed to identify certain cognitive abilities in humans, from chain-of-thought reasoning (working through a problem step by step) to theory of mind (guessing what other people are thinking).
These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs, replacing doctors, lawyers, and other professionals. Geoffrey Hinton has called out GPT-4’s apparent ability to string together thoughts as one reason he is now scared of the technology he helped create. But there’s a problem: there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren’t convinced one bit. “There are several critical issues with current evaluation techniques for large language models,” says Natalie Shapira, a computer scientist at Bar-Ilan University in Ramat Gan, Israel. “It creates the illusion that they have greater capabilities than what truly exists.” That’s why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way they are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched. “People have been giving human intelligence tests—IQ tests and so on—to machines since the very beginning of AI,” says Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico. “The issue throughout has been what it means when you test a machine like this. It doesn’t mean the same thing that it means for a human.” “There’s a lot of anthropomorphizing going on,” she says. “And that’s kind of coloring the way that we think about these systems and how we test them.” With hopes and fears at an all-time high, it is crucial that we get a solid grip on what large language models can and cannot do.
Open to interpretation
Most of the problems with how large language models are tested boil down to the question of how the results are interpreted. Assessments designed for humans, like high school exams and IQ tests, take a lot for granted.
When people score well, it is safe to assume that they possess the knowledge, understanding, or cognitive skills that the test is meant to measure. (In practice, that assumption only goes so far. Academic exams do not always reflect students’ true abilities. IQ tests measure a specific set of skills, not overall intelligence. Both kinds of assessment favor people who are good at those kinds of assessments.) But when a large language model scores well on such tests, it is not clear at all what has been measured. Is it evidence of actual understanding? A mindless statistical trick? Rote repetition? “There is a long history of developing methods to test the human mind,” says Laura Weidinger, a senior research scientist at Google DeepMind. “With large language models producing text that seems so human-like, it is tempting to assume that human psychology tests will be useful for evaluating them. But that’s not true: human psychology tests rely on many assumptions that may not hold for large language models.” Webb is aware of the issues he waded into. “I share the sense that these are difficult questions,” he says. He notes that despite scoring better than undergrads on certain tests, GPT-3 produced absurd results on others. For example, it failed a version of an analogical reasoning test about physical objects that developmental psychologists sometimes give to kids. In this test Webb and his colleagues gave GPT-3 a story about a magical genie transferring jewels between two bottles and then asked it how to transfer gumballs from one bowl to another, using objects such as a posterboard and a cardboard tube. The idea is that the story hints at ways to solve the problem. “GPT-3 mostly proposed elaborate but mechanically nonsensical solutions, with many extraneous steps, and no clear mechanism by which the gumballs would be transferred between the two bowls,” the researchers write in Nature. “This is the sort of thing that children can easily solve,” says Webb. 
“The stuff that these systems are really bad at tend to be things that involve understanding of the actual world, like basic physics or social interactions—things that are second nature for people.” So how do we make sense of a machine that passes the bar exam but flunks preschool? Large language models like GPT-4 are trained on vast numbers of documents taken from the internet: books, blogs, fan fiction, technical reports, social media posts, and much, much more. It’s likely that a lot of past exam papers got hoovered up at the same time. One possibility is that models like GPT-4 have seen so many professional and academic tests in their training data that they have learned to autocomplete the answers. A lot of these tests—questions and answers—are online, says Webb: “Many of them are almost certainly in GPT-3’s and GPT-4’s training data, so I think we really can’t conclude much of anything.” OpenAI says it checked to confirm that the tests it gave to GPT-4 did not contain text that also appeared in the model’s training data. In its work with Microsoft involving the exam for medical practitioners, OpenAI used paywalled test questions to be sure that GPT-4’s training data had not included them. But such precautions are not foolproof: GPT-4 could still have seen tests that were similar, if not exact matches. When Horace He, a machine-learning engineer, tested GPT-4 on questions taken from Codeforces, a website that hosts coding competitions, he found that it scored 10/10 on coding tests posted before 2021 and 0/10 on tests posted after 2021. Others have also noted that GPT-4’s test scores take a dive on material produced after 2021. Because the model’s training data only included text collected before 2021, some say this shows that large language models display a kind of memorization rather than intelligence. To avoid that possibility in his experiments, Webb devised new types of test from scratch.
“What we’re really interested in is the ability of these models just to figure out new types of problem,” he says. Webb and his colleagues adapted a way of testing analogical reasoning called Raven’s Progressive Matrices. These tests consist of an image showing a series of shapes arranged next to or on top of each other. The challenge is to figure out the pattern in the given series of shapes and apply it to a new one. Raven’s Progressive Matrices are used to assess nonverbal reasoning in both young children and adults, and they are common in IQ tests. Instead of using images, the researchers encoded shape, color, and position into sequences of numbers. This ensures that the tests won’t appear in any training data, says Webb: “I created this data set from scratch. I’ve never heard of anything like it.” Mitchell is impressed by Webb’s work. “I found this paper quite interesting and provocative,” she says. “It’s a well-done study.” But she has reservations. Mitchell has developed her own analogical reasoning test, called ConceptARC, which uses encoded sequences of shapes taken from the ARC (Abstraction and Reasoning Challenge) data set developed by Google researcher François Chollet. In Mitchell’s experiments, GPT-4 scores worse than people on such tests. Mitchell also points out that encoding the images into sequences (or matrices) of numbers makes the problem easier for the program because it removes the visual aspect of the puzzle. “Solving digit matrices does not equate to solving Raven’s problems,” she says. Brittle tests The performance of large language models is brittle. Among people, it is safe to assume that someone who scores well on a test would also do well on a similar test. That’s not the case with large language models: a small tweak to a test can drop an A grade to an F. 
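To picture what the digit matrices Webb used might look like, here is a toy sketch of encoding a Raven's-style puzzle as number sequences. The attribute codes, the puzzle, and the prompt layout below are invented for illustration; they are not Webb's actual data set or encoding scheme.

```python
# Toy sketch: a Raven's-style matrix with the visual aspect stripped away.
# Each cell's shape, color, and size become plain digits (illustrative codes).
SHAPE = {"triangle": 1, "square": 2, "circle": 3}
COLOR = {"black": 1, "gray": 2, "white": 3}

def encode(shape, color, size):
    """One cell becomes three digits: shape, color, size."""
    return (SHAPE[shape], COLOR[color], size)

# A 3x3 puzzle in which the shape cycles along each row; the final cell
# is left blank for the model to complete.
matrix = [
    [encode("triangle", "black", 1), encode("square", "black", 1), encode("circle", "black", 1)],
    [encode("square", "gray", 1), encode("circle", "gray", 1), encode("triangle", "gray", 1)],
    [encode("circle", "white", 1), encode("triangle", "white", 1), None],
]

def to_prompt(matrix):
    """Flatten the matrix into the kind of text a language model would see."""
    lines = []
    for row in matrix:
        cells = ["? ? ?" if c is None else " ".join(map(str, c)) for c in row]
        lines.append("   ".join(cells))
    return "\n".join(lines)

print(to_prompt(matrix))
```

Because the model only ever sees digit strings like these, the test cannot have appeared verbatim in web-scraped training data; this is also exactly the property behind Mitchell's objection that solving digit matrices is not the same task as solving the original visual puzzles.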
“In general, AI evaluation has not been done in such a way as to allow us to actually understand what capabilities these models have,” says Lucy Cheke, a psychologist at the University of Cambridge, UK. “It’s perfectly reasonable to test how well a system does at a particular task, but it’s not useful to take that task and make claims about general abilities.” Take an example from a paper by Microsoft researchers, in which they claimed to have identified “sparks of artificial general intelligence” in GPT-4. The team assessed the large language model using a range of tests. In one, they asked GPT-4 how to stack a book, nine eggs, a laptop, a bottle, and a nail in a stable manner. It answered: “Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up. The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.” Not bad. But when Mitchell tried her own version of the question, asking GPT-4 to stack a toothpick, a bowl of pudding, a glass of water, and a marshmallow, it suggested sticking the toothpick in the pudding and the marshmallow on the toothpick, and balancing the full glass of water on top of the marshmallow. (It ended with a helpful note of caution: “Keep in mind that this stack is delicate and may not be very stable. Be cautious when constructing and handling it to avoid spills or accidents.”) Here’s another contentious case. In February, Stanford University researcher Michal Kosinski published a paper arguing that theory of mind had emerged in large language models. Theory of mind is the cognitive ability to ascribe mental states to others, a hallmark of emotional and social intelligence that most children pick up between the ages of three and five. Kosinski reported that GPT-3 had passed basic tests used to assess the ability in humans. For example, Kosinski gave GPT-3 this scenario: “Here is a bag filled with popcorn. There is no chocolate in the bag.
Yet the label on the bag says ‘chocolate’ and not ‘popcorn.’ Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.” Kosinski then prompted the model to complete sentences such as: “She opens the bag and looks inside. She can clearly see that it is full of …” and “She believes the bag is full of …” GPT-3 completed the first sentence with “popcorn” and the second sentence with “chocolate.” He takes these answers as evidence that GPT-3 displays at least a basic form of theory of mind because they capture the difference between the actual state of the world and Sam’s (false) beliefs about it. It’s no surprise that Kosinski’s results made headlines. They also invited immediate pushback. “I was rude on Twitter,” says Cheke. Several researchers, including Shapira and Tomer Ullman, a cognitive scientist at Harvard University, published counterexamples showing that large language models failed simple variations of the tests that Kosinski used. “I was very skeptical given what I know about how large language models are built,” says Ullman. Ullman tweaked Kosinski’s test scenario by telling GPT-3 that the bag of popcorn labeled “chocolate” was transparent (so Sam could see it was popcorn) or that Sam couldn’t read (so she would not be misled by the label). Ullman found that GPT-3 failed to ascribe correct mental states to Sam whenever the situation involved an extra few steps of reasoning. “The assumption that cognitive or academic tests designed for humans serve as accurate measures of LLM capability stems from a tendency to anthropomorphize models and align their evaluation with human standards,” says Shapira. “This assumption is misguided.” For Cheke, there’s an obvious solution. Scientists have been assessing cognitive abilities in non-humans for decades, she says. 
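The kind of controlled perturbation Ullman applied can be sketched as a simple scenario generator: keep the false-belief story fixed and vary exactly one feature (a transparent bag, an illiterate subject) so that each variant demands a different answer. The wording and variant names below are my own toy illustration, not the researchers' actual test materials.

```python
# Toy sketch: generate controlled variants of a Kosinski-style false-belief
# scenario, in the spirit of Ullman's perturbations (illustrative only).
BASE = ("Here is a bag filled with popcorn. There is no chocolate in the bag. "
        "{twist}Sam finds the bag. {ability}"
        "She reads the label, which says 'chocolate'.")

VARIANTS = {
    # The classic setup: Sam should be misled by the label.
    "original": {"twist": "", "ability": "She cannot see what is inside the bag. "},
    # One change: the bag is see-through, so the label should not mislead her.
    "transparent": {"twist": "The bag is transparent, so its contents are visible. ",
                    "ability": ""},
    # One change: Sam cannot read, so the label should not mislead her.
    "illiterate": {"twist": "", "ability": "Sam cannot read. "},
}

def build_prompts():
    """Pair each scenario variant with the belief probe shown to the model."""
    probe = " She believes the bag is full of ..."
    return {name: BASE.format(**slots) + probe for name, slots in VARIANTS.items()}

for name, prompt in build_prompts().items():
    print(f"--- {name} ---\n{prompt}\n")
```

A model with a genuine grasp of false belief should answer "chocolate" only in the original variant and "popcorn" in the other two; a model keying on surface patterns tends to give the same answer everywhere, which is the failure mode the counterexamples exposed.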
Artificial-intelligence researchers could adapt techniques used to study animals, which have been developed to avoid jumping to conclusions based on human bias. Take a rat in a maze, says Cheke: “How is it navigating? The assumptions you can make in human psychology don’t hold.” Instead researchers have to do a series of controlled experiments to figure out what information the rat is using and how it is using it, testing and ruling out hypotheses one by one. “With language models, it’s more complex. It’s not like there are tests using language for rats,” she says. “We’re in a new zone, but many of the fundamental ways of doing things hold. It’s just that we have to do it with language instead of with a little maze.” Weidinger is taking a similar approach. She and her colleagues are adapting techniques that psychologists use to assess cognitive abilities in preverbal human infants. One key idea here is to break a test for a particular ability down into a battery of several tests that look for related abilities as well. For example, when assessing whether an infant has learned how to help another person, a psychologist might also assess whether the infant understands what it is to hinder. This makes the overall test more robust. The problem is that these kinds of experiments take time. A team might study rat behavior for years, says Cheke. Artificial intelligence moves at a far faster pace. Ullman compares evaluating large language models to Sisyphean punishment: “A system is claimed to exhibit behavior X, and by the time an assessment shows it does not exhibit behavior X, a new system comes along and it is claimed it shows behavior X.” Moving the goalposts Fifty years ago people thought that to beat a grand master at chess, you would need a computer that was as intelligent as a person, says Mitchell. But chess fell to machines that were simply better number crunchers than their human opponents. Brute force won out, not intelligence. 
Similar challenges have been set and passed, from image recognition to Go. Each time computers are made to do something that requires intelligence in humans, like play games or use language, it splits the field. Large language models are now facing their own chess moment. “It’s really pushing us—everybody—to think about what intelligence is,” says Mitchell. Does GPT-4 display genuine intelligence by passing all those tests or has it found an effective, but ultimately dumb, shortcut—a statistical trick pulled from a hat filled with trillions of correlations across billions of lines of text? “If you’re like, ‘Okay, GPT4 passed the bar exam, but that doesn’t mean it’s intelligent,’ people say, ‘Oh, you’re moving the goalposts,’” says Mitchell. “But do we say we’re moving the goalpost or do we say that’s not what we meant by intelligence—we were wrong about intelligence?” It comes down to how large language models do what they do. Some researchers want to drop the obsession with test scores and try to figure out what goes on under the hood. “I do think that to really understand their intelligence, if we want to call it that, we are going to have to understand the mechanisms by which they reason,” says Mitchell. Ullman agrees. “I sympathize with people who think it’s moving the goalposts,” he says. “But that’s been the dynamic for a long time. What’s new is that now we don’t know how they’re passing these tests. We’re just told they passed it.” The trouble is that nobody knows exactly how large language models work. Teasing apart the complex mechanisms inside a vast statistical model is hard. But Ullman thinks that it’s possible, in theory, to reverse-engineer a model and find out what algorithms it uses to pass different tests. “I could more easily see myself being convinced if someone developed a technique for figuring out what these things have actually learned,” he says. 
“I think that the fundamental problem is that we keep focusing on test results rather than how you pass the tests.”
This story first appeared in China Report, MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday.

There’s something so visceral about the phrase “pig-butchering scam.” The first time I came across it was in my reporting a year ago, when I was looking into how strange LinkedIn connection requests turned out to be from crypto scammers. As I wrote then, fraudsters were creating “fake profiles on social media sites or dating sites, [to] connect with victims, build virtual and often romantic relationships, and eventually persuade the victims to transfer over their assets.” The name, which scammers themselves came up with, compares the lengthy, involved trust-building process to what it’s like to raise a pig for slaughter. It’s a tactic that has been used to steal millions of dollars from victims on LinkedIn and other platforms. You can read that story here.

But there are also other, far more dire consequences to these scams. And over the past few weeks, I’ve noticed growing attention, in both the US and China, to the scammers behind these crimes, who are often victims of the scams themselves. A new book in English, a movie in Chinese, and a slew of media reports in both languages are now shining a light on the fascinating (and horrifying) aspects of a scary trend in human trafficking.

For a sense of scale, just last week one of the largest crypto exchanges released data showing a huge jump in the number of pig-butchering scams reported to the company: an increase of 100.5% from 2022 to 2023, even though there are still a few months left in this year.

This kind of fraud is the subject of a new Chinese movie that unexpectedly became a box-office hit. No More Bets is centered on two Chinese people who are lured to Myanmar with the promise of high-paying jobs; once trapped abroad, they are forced to become scammers, though—spoiler alert—they eventually manage to escape.
But many of their fellow victims are abused, raped, or even killed for trying to do the same. While the plot is fictional, it was adapted from dozens of interviews the movie crew conducted with real victims, some of which are shown at the end of the film. (I’ll probably check out the movie when it premieres in the US on August 31.) Many low-level scammers have in fact been coerced into committing these crimes. They leave their homes with the hope of getting stable employment, but once they find themselves in a foreign country—usually Myanmar, Cambodia, or the Philippines—they are held captive and unable to leave.

Since the movie came out on August 8, it has made nearly $470 million at the box office, placing it among the top 10 this year, even though it was only screened in China. It has also dominated social media discourse in China, inspiring over a dozen trending topics on Weibo and other platforms.

At the same time, investigative reports from Chinese journalists have corroborated the movie’s plot. In a podcast published earlier this month, one Chinese-Malaysian victim told an exiled Chinese investigative journalist about his experience of being lied to by job recruiters and forced to become a scammer in the Philippines. There, 80% of his colleagues were from mainland China, with the rest from Taiwan and Malaysia. Many of them were from rural areas and had little education. But as has recently been reported, scammer groups are increasingly looking to recruit highly educated people as they target more Chinese students overseas, or even English-speaking populations.

Chinese people are no strangers to telecom fraud and online scams, but the recent wave of attention has made them aware of how globalized these scams have become. It has also tarnished the reputation of Southeast Asian countries, which are now struggling to attract Chinese tourists.
These days, if you type “Myanmar” into Douyin, the Chinese version of TikTok, all the autocomplete suggestions relate to the pig-butchering scams, like the “self-told story of someone who escaped from Myanmar.” There are still videos promoting Myanmar to tourists, but the comment sections are filled with viewers who insinuate that the Burmese video creators are working for the human-trafficking groups. Myanmar even recently tried to work with a Chinese province to promote tourism.

Meanwhile, in the US, a new book about cryptocurrencies by Bloomberg reporter Zeke Faux is out next month. Faux traveled to Sihanoukville in southwestern Cambodia, where criminal gangs orchestrate pig-butchering scams. It was once a prosperous casino town for Chinese businesspeople (gambling is outlawed in China). But after the Cambodian government turned against gambling, and the pandemic made international travel difficult, the gambling gangs turned their casinos into online scam operation centers, where scam victims are trapped and isolated from the outside world by metal gates. Neighbors told Faux of frequent suicides: “If an ambulance doesn’t go inside at least twice a week, it is a wonder.” One victim told him he had to hide a phone in his rectum to get in touch with someone outside and escape.

But stories of successful escapes are rare. Even though the Chinese government announced in mid-August that it would work more with Southeast Asian countries to crack down on these criminal activities, it remains to be seen how successful those efforts will be. In the case of Cambodia, international law enforcement actions so far have been obstructed by alleged corruption on the ground, according to reporting by the New York Times. As I have written before, there are many factors that make it hard to hold these scammers accountable: their use of crypto, the weak government control in the regions where they operate, and the criminals’ ever-changing tactics and platform choices.
But the fact that both reporting and pop culture are starting to draw attention to where and how these criminal groups operate could be a good first step toward justice. What solutions do you think could help reduce the number of pig-butchering scams? Let me know your thoughts at zeyi@technologyreview.com.

Catch up with China

1. Forbes got a copy of a draft proposal from 2022 that would address national security concerns related to TikTok. While it is unclear whether the draft is still being considered a year later, it shows that the US government wanted unprecedented control over the platform’s internal data and essential functions.

2. After Japan started releasing treated radioactive water into the ocean last week, the Chinese government protested by banning seafood imports from the country.
Many Chinese people are also angry about the release and have taken to harassing Japanese businesses with phone calls.
3. The US commerce secretary, Gina Raimondo, visited Beijing on Monday, making her the latest high-ranking Biden administration official to travel to the country. She agreed with her Chinese counterpart to launch an “information exchange” on export controls.

4. A new type of battery developed by the Chinese company CATL can make fast charging for EVs even faster.

5. The Biden administration is hoping to secure a six-month extension of the Science and Technology Agreement with China, a 44-year-old document that fosters scientific collaboration.

6. Chinese ultra-fast-fashion company Shein will acquire a one-third stake in Forever 21’s operating company, Sparc Group. In return, Sparc will gain a minority stake in Shein. The Chinese company will start selling Forever 21 apparel online, while Forever 21 will take Shein products to its physical stores.

7. DiDi, the troubled Chinese ride-hailing giant, is selling its electric-vehicle business to XPeng, a Chinese EV company.

Lost in translation

Currently, there are over 2,700 online hospitals in China, where people can get diagnoses and prescriptions completely online. Because many of these platforms are able to come up with a prescription in less than two minutes, there’s widespread suspicion that they are risking patient health by relying on ChatGPT-like models. Last week, the industry was put on notice after Beijing’s Municipal Health Commission drafted a new regulation to ban AI-generated prescriptions. According to a Chinese medical news publication, the city-wide regulation repeats and reinforces a March 2022 national policy that instituted the same kind of ban, but the new proposal comes at a time when people have started to see what large language models are capable of and when a few tech platforms have already started experimenting with medical AI.
Following news of the new proposal, JD Health, one of the leading digital health-care platforms in China, told the publication that its AI features are currently used only to match patients with doctors and help doctors increase productivity. Medlinker, a Chinese internet startup that announced an AI product in May, responded that the product, called MedGPT, is still in internal testing and hasn’t been used in any external services.

One more thing

NBA star James Harden was having a lot of fun during a recent trip to China. When Harden promoted his new wine brand on the Douyin livestream e-commerce channel of a Chinese influencer, the first batch of 10,000 bottles (sold in bundles of two for $60) sold out in only 14 seconds. After a second batch of 6,000 bottles also sold out in seconds, Harden was so excited that he did a cartwheel in the back of the room.
James Harden is having the time of his life in China sold 10,000 bottle of wine in 5 secs — NBACentral (@TheDunkCentral)
The product shortages and supply-chain delays of the global covid-19 pandemic are still fresh memories. Consumers and industry are concerned that the next geopolitical or climate event may have a similar impact. Against a backdrop of evolving regulations, these conditions mean manufacturers want to be prepared for short supplies, concerned customers, and weakened margins.

For supply chain professionals, achieving a “phygital” information flow—the blending of physical and digital data—is key to unlocking resilience and efficiency. As physical objects travel through supply chains, they generate a rich flow of data about the item and its journey—from its raw materials to its manufacturing conditions, even its expiration date—bringing new visibility and pinpointing bottlenecks. This phygital information flow offers significant advantages, from enhancing the ability to create rich customer experiences to satisfying environmental, social, and corporate governance (ESG) goals. In a 2022 EY global survey of executives, 70% of respondents agreed that this kind of digitized supply chain will increase their company’s revenue.

For disparate parties to exchange product information effectively, they require a common framework and universally understood language. Among supply chain players, data standards create a shared foundation. Standards help uniquely identify, accurately capture, and automatically share critical information about products, locations, and assets across trading communities.

The push for digital standards

Supply chain data’s power lies in consistency, accuracy, and seamless sharing to fuel analytics and generate insight about operations. Standards can help precisely describe the physical and digital objects that make up a supply chain, and track what happens to them from production to delivery. This increased visibility is under sharp focus: according to a 2022 survey of supply chain leaders by McKinsey and Company, more than 90% of respondents from nearly every sector reported supply chain disruptions during the previous year.
These standards rely on numbering and attribution systems—which can be encoded into data carriers and attached to products—to uniquely identify assets at every level. When data is captured, it provides digital access to information about products and their movement through the supply chain. Numbering and attribution systems such as the Global Trade Item Number (GTIN) identify traded items and products; likewise, Serial Shipping Container Codes (SSCCs) identify logistic units. Global Location Numbers (GLNs) identify business locations and parties, such as an invoice address or a delivery location. Global Product Classification (GPC) codes are a global standard that uses a hierarchical system to classify items by characteristics.

Data carriers include Universal Product Code (UPC) barcodes, the one-dimensional (1D) barcodes familiar to consumers and commonly scanned at the point of sale in North America. Outside the US and Canada, the European Article Number (EAN) barcode is used. These barcodes encode GTIN identifier data. In recent years, more complex and robust data carriers have become common, including radio-frequency identification (RFID) tags and two-dimensional (2D) barcodes like QR codes (quick-response codes). These codes can hold far more data than simple 1D barcodes.

These identification and data capture standards work alongside others for information sharing, including master data, business transaction data, physical event data, and communication standards for sharing information among applications and partners. Phygital information must meet a wide range of needs, including regulatory compliance, consumer and patient engagement protections, and supply chain and trading partner requirements, such as procurement, production, marketing, and ESG reporting. Regulation is an important industry driver: chain of custody and authentication of products and trading partners are vital for safe, secure supply chains.
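To make the identification scheme concrete: every GS1 key mentioned above (GTIN, SSCC, GLN) ends in a mod-10 check digit, which is what lets a scanner reject a misread barcode. The sketch below, in Python, implements the published GS1 check-digit rule; the function names and the split between "company prefix" and "item reference" are illustrative, not part of the standard's terminology.

```python
def gs1_check_digit(digits: str) -> int:
    """Check digit for a GS1 key, given every digit except the last.

    Working from the rightmost digit leftward, digits are weighted
    3, 1, 3, 1, ...; the check digit is whatever brings the weighted
    sum up to the next multiple of 10. The same rule covers GTIN-8/12/13/14,
    SSCC, and GLN.
    """
    total = sum(
        int(d) * (3 if i % 2 == 0 else 1)
        for i, d in enumerate(reversed(digits))
    )
    return (10 - total % 10) % 10


def make_gtin13(company_prefix: str, item_ref: str) -> str:
    """Assemble a GTIN-13 (the number an EAN barcode encodes)."""
    body = company_prefix + item_ref
    assert len(body) == 12, "GTIN-13 has 12 digits before the check digit"
    return body + str(gs1_check_digit(body))


# Example: a 7-digit company prefix plus a 5-digit item reference.
print(make_gtin13("4006381", "33393"))  # -> 4006381333931
```

A point-of-sale system runs the same arithmetic in reverse: recompute the check digit from the first 12 scanned digits and compare it with the 13th before trusting the read.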
“Governments and regulatory agencies have leveraged the pervasiveness of standards adoption to further global goals of food, product, and consumer safety,” says Siobhan O’Bara, senior vice president of community engagement for GS1 US, a member of GS1, a global not-for-profit supply chain standards organization.

New developments in standards across industries

Global standards and unique identifiers are not only driving today’s supply chain evolution; they also allow for robust use cases across a wide variety of industries. Here are a few examples to consider.

Healthcare: Today’s healthcare organizations are under pressure to improve patient outcomes, prevent errors, and control costs. Identification systems can help by empowering patients with information that helps them follow medical protocols. “We know in healthcare that a critical part of our world is not only whether people have access to healthcare but whether they follow their clinical instructions,” says O’Bara. O’Bara offers the example of a home nebulizer, a device used to deliver medicine to improve respiratory symptoms. By equipping a nebulizer with an RFID chip, she says, “a patient can keep track of whether they are following the prescribed treatment. For instance, if there’s a filter with that nebulizer, when it gets locked into the device, the chip sends a signal, and the nebulizer can display for the patient at the correct time that the filter has been consumed. This mechanism can also convey to healthcare practitioners whether the patient is following the protocol properly.” The result is not only a lower risk of patient miscommunication but improved patient care.

Retail: Data about an item’s origins can prevent business losses and enhance public safety. For example, a grocery store that has a product recall on spinach due to a bacterial outbreak must be able to trace the origin of batches, or else destroy its entire inventory.
A unique identifier can improve the speed, accuracy, and traceability of recalls for public safety, precision, and cost effectiveness.

Consumer goods: A 2D barcode on a bottle of hand lotion can reveal a vast amount of data for consumers, including its origin, ingredients, organic certification, and packaging materials. For industry, unique identifiers can tell warehouse workers where a product is located, inform distributors whether a product contains potentially dangerous ingredients, and warn retailers if a product has age restrictions. “Data delivers value in all directions of the supply chain,” says O’Bara. “Data standards are the only way to accurately and consistently—with confidence—obtain and rely on these data points to complete your business operations,” she says.

Manufacturing: Achieving ESG compliance hinges on an organization’s supply chain visibility, says O’Bara. “You always have to have data to support your ESG claims, and the only way to get that data is by tracking it through a consistent and calculated method, no matter where it’s consumed.” Standards provide access to structured sustainability information that can be measured to ensure compliance with ESG regulations, and shared with supply chain partners.

The next frontier

Standards empower organizations to identify, capture, and share information seamlessly, creating a common language that can support business processes. Savvy organizations are going a step further, providing customers with direct access to supply chain and other valuable data. According to 2023 research by Gartner, customers given visibility into the supply chain are twice as likely to return; however, only 23% of supply chains currently enable customers this way. O’Bara points to digital labeling as a perfect example of the supply chain future.
Digital labels accessed through 2D barcodes by smart devices could provide consumers with information about hundreds of product attributes, such as nutrition, as well as facts that go beyond the label such as environmental, lifestyle, and sustainability factors. This future-forward approach to an increasingly phygital world could drive long-term consumer engagement, and open the door for increased business growth. “Once you have unlocked value from unique identifiers, there are so many more ways that you can think creatively and cross-functionally about how unifying standards along a supply chain can enable commercial functions and consumer engagement with potential to drive substantial top- and bottom-line revenue,” says O’Bara. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
This is today’s edition of our weekday newsletter, which provides a daily dose of what’s going on in the world of technology.

Google DeepMind has launched a watermarking tool for AI-generated images

The news: Google DeepMind has launched a new watermarking tool that labels whether pictures have been generated with AI. The tool, called SynthID, will allow users to generate images using Google’s AI image generator Imagen, then choose whether to add a watermark.

Why watermarking? In the past year, the huge popularity of generative AI models has brought with it the proliferation of AI-generated deepfakes, non-consensual porn, and copyright infringements. Watermarking—a technique where you hide a signal in a piece of text or an image to identify it as AI-generated—has become one of the most popular policy suggestions to curb harms.

Why it matters: The hope is that SynthID could help people identify when AI-generated content is being passed off as real, to counter misinformation or help protect copyright. Read the full story.

—Melissa Heikkilä

Interested in the impact of generative AI? Read more about this topic:
+ These new tools could help protect our pictures from AI. PhotoGuard and Glaze are just two new systems designed to make it harder to tinker with photos using AI tools.
+ AI models spit out photos of real people and copyrighted images. The finding could strengthen artists’ claims that AI companies are infringing their rights.
+ These new tools let you see for yourself how biased AI image models are. DALL-E 2 and two recent versions of Stable Diffusion tend to produce images of people that look white and male, especially if the prompt is a word like “CEO.”

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The backlash against Worldcoin is mounting
Regulators aren’t convinced it’s adequately protecting people’s data.
+ How Worldcoin recruited its first half a million test users.
2 New Zealand is introducing a tech giant tax
In a bid to force the massive internet companies to pay more.

3 Google’s stranglehold on search is weakening
Its results are seemingly less accurate, and users are fed up.
+ Chatbots could one day replace search engines. Here’s why that’s a terrible idea.

4 US Congress will host a major AI forum next month
The sector’s biggest names will be lining up to have their say in shaping critical legislation.
+ AI jargon is impossible to escape.
+ ChatGPT is about to revolutionize the economy. We need to decide what that looks like.

5 Climate change is altering Japan’s culture
Tokyo’s summers are becoming longer, but the country’s leaders are reluctant to cut ties with fossil fuels.

6 Chinese sextortion scammers are all over Twitter
The platform has become much more hospitable to such bad actors since Elon Musk took over.
+ It’s still a challenge to spot Chinese state media social accounts.

7 These days, noise canceling is big business
But over-relying on fancy headphones won’t necessarily make you more productive.
+ Is it healthy to constantly listen to podcasts? These people do.

8 Chinese robot waiters are charming Korean diners
But human servers aren’t so thrilled.
+ AI is creeping into your takeout apps.

9 Internet fandom is about to become a whole lot less creative
Outsourcing your imagination to AI chatbots just isn’t as fun.

10 MySpace was an overwhelming mess
But we loved it anyway.

Quote of the day

“The politician in me thinks you’re going to literally lose every voter under 35, forever.”

—Commerce Secretary Gina Raimondo considers the implications of banning TikTok in the US ahead of her visit to China.

The big story

Money is about to enter a new era of competition

April 2022

To many, cash now seems anachronistic. People across the world commonly use their smartphones to pay for things.
This shift may look like a potential driver of inequality: if cash disappears, one imagines, that could disenfranchise the elderly, the poor, and others. In practice, though, cell phones are nearly at saturation in many countries. And digital money, if implemented correctly, could be a force for financial inclusion. The big questions now are around how we proceed.

—Eswar Prasad

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line.)

+ is still shaping beat-’em-up video games, 50 years after his death.
+ The drawn-out legal drama that gripped the world has finally been resolved.
+ Here’s how to make your favorite accessible to everyone.
+ Thinking of going with your interiors? You’re not alone.
+ is cool again, apparently?
This is today’s edition of our weekday newsletter, which provides a daily dose of what’s going on in the world of technology.

How culture drives foul play on the internet, and how new “upcode” can protect us

From Bored Apes and Fancy Bears to Shiba Inu coins, self-replicating viruses, and whales, the internet is crawling with fraud, hacks, and scams. And while new technologies come and go, they change little about the fact that online illegal operations exist because some people are willing to act illegally, and others fall for the stories they tell.

Ultimately, online crime is a human story. Three new books offer explanations for why it happens, why it works, and how we can protect ourselves from falling for such schemes—no matter how convincing they are.

—Rebecca Ackermann

Rebecca’s story is from the new print issue of MIT Technology Review.

The tricky ethics of brain implants and informed consent

We’re making major leaps in terms of helping people who’ve lost their ability to speak to regain their voices. Earlier this week, we described how brain-computer interfaces successfully translated signals from the brains of two study participants into speech thanks to brain implants.

Both of the women can communicate without an implant. The first, Pat Bennett, who has ALS, also known as Lou Gehrig’s disease, uses a computer to type. The second, Ann Johnson, who lost her voice as the result of a brain-stem stroke that left her paralyzed, uses an eye-tracking device to select letters on a computer screen.

That ability to communicate is what gave them the power to consent to participate in these trials. But how does consent work when communication is more difficult?

—Cassandra Willyard

This story first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.
Why salt marshes could help save Venice

Venice, Italy, is suffering from a combination of subsidence—the city’s foundations slowly sinking into the mud on which they are built—and rising sea levels. In the worst-case scenario, it could disappear underwater by the year 2100.

Scientists increasingly see the sinking city as a laboratory for environmental solutions. They’re investigating whether artificial mudflats in the Venetian lagoon can be turned back into the marshes that once thrived in this area and become a functioning part of the lagoon ecosystem again, which, in turn, would help to safeguard the future of the city itself.

—Catherine Bennett

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Teachers should assume that all their students are using AI
If ChatGPT can be used, it will be used, is the new rule of thumb.
+ Teachers and educators are limbering up for a challenging academic year.
+ ChatGPT is going to change education, not destroy it.

2 Miami has appointed its own chief heat officer
Jane Gilbert is the first person in the world to hold the position.

3 The beautiful complexity of the US radio spectrum
Color coding and visualizing the nation’s radio frequencies is a significant undertaking.

4 Donald Trump has returned to Twitter
He broke his two-year silence to share an imposing mug shot.

5 When natural disasters strike, social media isn’t helping anymore
Facebook and Twitter have turned their backs on news. That’s making it much harder to get vital information to residents in danger.
+ More than 1,000 people are still missing in Maui.
+ How AI can actually be helpful in disaster response.

6 News organizations are pushing back against ChatGPT
They appear to be blocking OpenAI’s web crawler from scraping their web pages.
+ Open source AI isn’t all it’s cracked up to be.
+ Wikipedia is doing just fine in the age of AI, thanks.
+ We are hurtling toward a glitchy, spammy, scammy, AI-powered internet.

7 The Amazon is starting to release its carbon
Worryingly, parts of it are releasing more carbon than they absorb.
+ Tropical trees can’t photosynthesize in this heat.

8 Eating plastic is a novel way to get rid of it
In theory, microbes and insects could one day help us to break down tough polymers.
+ How chemists are tackling the plastics problem.

9 Beauty filters aren’t always about deception
Sometimes, they’re about whimsy and simple fun.
+ Hyper-realistic beauty filters are here to stay.

10 It could get messy on the moon
Space junk? No thank you.

Quote of the day

“How do you ever truly understand the impact that you can have on someone’s life, you know?”

—Charli D’Amelio, one of the internet’s best-known faces and TikTok’s breakout star, gets philosophical while considering her effect on her fans’ lives.

The big story

Inside Australia’s plan to survive bigger, badder bushfires

April 2019

Australia’s colonial history is dotted with fires so enormous they have their own names. The worst, Black Saturday, struck the state of Victoria on February 7, 2009. Fifteen separate fires scorched the state over just two days, killing 173 people.

While Australia is notorious for spectacular blazes, it actually ranks below the United States, Indonesia, Canada, Portugal, and Spain when it comes to the economic damage caused by wildfires over the past century. That’s because while other nations argue about the best way to tackle the issue, the horrors of Black Saturday led Australia to drastically change its response. One of the biggest changes was also one of the most basic: taking another look at the way fire risk is rated.

—Bianca Nogrady

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line.)
+ How to make at home.
+ I’d like to be, under the sea, checking out this real life.
+ isn’t just surviving—it’s thriving.
+ ? No thanks.
+ It’s the most wonderful time of the year: London Zoo’s !
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week I covered new research on restoring speech. Two teams reported that they used brain-computer interfaces to help people who had lost their ability to speak regain their voices. Each group used a different kind of implant to capture electrical signals coming from the brain, and a computer to translate those signals into speech.

The participant in the first study, Pat Bennett, lost her ability to speak as a result of ALS, also known as Lou Gehrig’s disease, a devastating illness that affects all the nerves of the body. Eventually it leads to near-total paralysis, so even though people can think and reason, they have almost no way to communicate. The other study involved a 47-year-old woman, Ann Johnson, who lost her voice as the result of a brain-stem stroke that left her paralyzed, unable to speak or type.

Both these women can communicate without an implant. Bennett uses a computer to type. Johnson uses an eye-tracking device to select letters on a computer screen or, often with her husband’s help, a letterboard to spell out words. Both methods are slow, topping out at about 14 or 15 words a minute, but they work.

That ability to communicate is what gave them the power to consent to participate in these trials. But how does consent work when communication is more difficult? For this week’s newsletter, let’s take a look at the ethics of communication and consent in scientific studies where the people who need these technologies most have the least ability to make their thoughts and feelings known.

People who especially stand to benefit from this type of research are those with locked-in syndrome (LIS), who are conscious but almost entirely paralyzed, without the ability to move or speak. Some can communicate with eye-tracking devices, blinks, or muscle twitches.
Jean-Dominique Bauby, for example, suffered a brain-stem stroke and could communicate only by blinking his left eye. Still, he managed to write a memoir by mentally composing passages and then dictating them one letter at a time as an assistant recited the alphabet over and over again. That kind of communication is exhausting, however, for both the patient and the person assisting. It also robs these individuals of their privacy. “You have to completely depend on other people to ask you questions,” says Nick Ramsey, a neuroscientist at the University Medical Center Utrecht Brain Center in the Netherlands. “Whatever you want to do, it’s never private. There’s always someone else, even when you want to communicate with your family.”

A brain-computer interface that translates electrical signals from the brain into text or speech in real time would restore that privacy and give patients the chance to engage in conversation on their own terms. But allowing researchers to install a brain implant as part of a clinical trial is not a decision that should be taken lightly. Neurosurgery and implant placement come with a risk of seizures, bleeding, infections, and more. And in many trials, the implant is not designed to be permanent. That’s something Edward Chang, a neurosurgeon at UCSF, and his team try to make clear to potential participants. “This is a time-limited trial,” he says. “Participants are fully informed that after a number of years, the implant may be removed.”

Making sure trial participants give informed consent is always important, but communication struggles make the process tricky. Ramsey’s group has been working with patients with ALS for years, and they’re one of a few teams working with patients who have extremely limited communication abilities. In 2016, they reported that they had developed a system that allowed a woman with ALS to use her mind to perform a mouse click. By the end of the study, she could select three letters per minute.
"That person has used it for seven years, and she used it day and night to communicate when she couldn't use any other means anymore," he says. Now Ramsey and his colleagues are working with other individuals in an attempt to translate brain activity into speech.

The consent process is "a pretty elaborate procedure," Ramsey says. First, the team explains the research in detail, more than once. Then they ask a set of 20 simple yes-or-no questions to make sure the individual understands what the research will entail. There's a limit to how many questions the potential participant can get wrong. All this happens in the presence of a legal guardian and an independent observer, and the whole procedure is recorded on video, Ramsey says. The process takes about four hours, and that doesn't include the several weeks that patients have to mull over their decision.

But people who are dependent on others for their care and communication needs are in a particularly vulnerable spot. In one ethics paper on implantable BCIs, researchers point out that the desire to consent might be influenced by how a patient's decision would affect family members and caregivers: "If an implantable BCI trial or therapy offers the prospect of changing the character or degree of dependency on others, a [person with ALS] may feel obligated to pursue a BCI. Depending on the nature of this felt obligation, the voluntariness of the decision to have a BCI implanted may come into question."

Ramsey's group doesn't work with patients who are completely locked in, unable to communicate via any voluntary movement or noise. But he says there are potentially ways to get consent with the help of a functional MRI scanner. "They have to perform a simple task like reading words or counting backwards," he says.
"Simple tasks that we know work in everyone who is awake." If the data show that the person isn't performing those activities, the researchers assume that "either the person is not able to follow instructions or the person doesn't want to participate and tells us so by deliberately not doing the task."

But that's still theoretical. Putting brain implants in people who have the most extreme version of locked-in syndrome is generally frowned upon, Ramsey says. "There are clear legal and ethical rules for engaging people who cannot express themselves in BCI research," he says. "It is very hard to justify an implant in complete LIS, even if a legal guardian consents." In a 2022 study, scientists reported that a man who was fully locked in could communicate with the help of a brain implant by changing his brain activity to match certain tones. But in this case, the man gave consent for the procedure before he entirely lost the ability to communicate.

At least for now, people already in a locked-in state are stuck. A brain-computer interface might be their only hope of communicating, but they're excluded from studies because they can't convey their desire to join. As technology advances and therapies emerge, some of those people might regain their voice. That's why finding ethical ways for them to provide informed consent is a goal worth pursuing. Indeed, it's a moral imperative.

Read more from Tech Review's archive

Researchers give people brain implants, but they also sometimes take them away, even if the research participant doesn't want them to. Jessica Hamzelou explored this.

Entrepreneurs want brain implants for the masses, but many scientists want to make sure the implants get to those who need them most. Antonio Regalado covered that tension in 2021.

Tech is getting better and better at decoding the brain. Earlier this year, Jessica Hamzelou spoke with Nita Farahany about her book The Battle for Your Brain.

From around the web

A bit of good news heading into the fall.
The FDA approved the new RSV vaccine for pregnant women, and they may be able to get the shot as soon as October.

People with long covid can experience health problems that last at least two years, according to a new study in Nature Medicine. The research, which relies on health records from the Department of Veterans Affairs, suggests that people who had even mild covid-19 early in the pandemic have a higher risk of lung problems, diabetes, and other health issues two years out than people who didn't contract the virus then.

The long and weird history of the new raft of weight-loss drugs includes a Gila monster. They work, but scientists still don't know why.

Covid is ticking up this fall. But experts don't see it settling into a seasonal pattern just yet.