Monday, April 1, 2024

What is Volt Typhoon? A cybersecurity expert explains the Chinese hackers targeting US critical infrastructure

U.S.-China antagonism is particularly acute in the realm of hacking and cybersecurity. AP Photo/Kiichiro Sato
Richard Forno, University of Maryland, Baltimore County

Volt Typhoon is a Chinese state-sponsored hacker group. The United States government and its primary global intelligence partners, known as the Five Eyes, issued a warning on March 19, 2024, about the group’s activity targeting critical infrastructure.

The warning echoes analyses by the cybersecurity community about Chinese state-sponsored hacking in recent years. As is common with cyberthreat groups, Volt Typhoon goes by many aliases: it is also known as Vanguard Panda, Bronze Silhouette, Dev-0391, UNC3236, Voltzite and Insidious Taurus. Following these latest warnings, China again denied that it engages in offensive cyberespionage.

Volt Typhoon has compromised thousands of devices around the world since it was publicly identified by security analysts at Microsoft in May 2023. However, some analysts in both the government and cybersecurity community believe the group has been targeting infrastructure since mid-2021, and possibly much longer.

Volt Typhoon uses malicious software that penetrates internet-connected systems by exploiting vulnerabilities such as weak administrator passwords, factory default logins and devices that haven’t been updated regularly. The hackers have targeted communications, energy, transportation, water and wastewater systems in the U.S. and its territories, such as Guam.

In many ways, Volt Typhoon functions similarly to traditional botnet operators that have plagued the internet for decades. It takes control of vulnerable internet devices such as routers and security cameras to hide and establish a beachhead before using those systems to launch future attacks.

Operating this way makes it difficult for cybersecurity defenders to accurately identify the source of an attack. Worse, defenders could accidentally retaliate against a third party who is unaware that they are caught up in Volt Typhoon’s botnet.

Why Volt Typhoon matters

Disrupting critical infrastructure has the potential to cause economic harm around the world. Volt Typhoon’s operation also poses a threat to the U.S. military by potentially disrupting power and water to military facilities and critical supply chains.

FBI Director Christopher Wray testified at a congressional hearing on Jan. 31, 2024, about Chinese hackers targeting U.S. critical infrastructure.

Microsoft’s 2023 report noted that Volt Typhoon could “disrupt critical communications infrastructure between the United States and Asia region during future crises.” The March 2024 report, published in the U.S. by the Cybersecurity and Infrastructure Security Agency, likewise warned that the botnet could lead to “disruption or destruction of critical services in the event of increased geopolitical tensions and/or military conflict with the United States and its allies.”

Volt Typhoon’s existence and the escalating tensions between China and the U.S., particularly over Taiwan, underscore the latest connection between global events and cybersecurity.

Defending against Volt Typhoon

The FBI reported on Jan. 31, 2024, that it had disrupted Volt Typhoon’s operations by removing the group’s malware from hundreds of small office/home office routers. However, the U.S. is still determining the extent of the group’s infiltration of America’s critical infrastructure.

On March 25, 2024, the U.S. and U.K. announced that they had imposed sanctions on Chinese hackers involved in compromising their infrastructures. And other countries, including New Zealand, have revealed cyberattacks traced back to China in recent years.

All organizations, especially infrastructure providers, must practice time-tested safe computing centered on preparation, detection and response. They must ensure that their information systems and smart devices are properly configured and patched, and that they can log activity. And they should identify and replace any devices at the edges of their networks, such as routers and firewalls, that are no longer supported by their vendor.

Organizations can also implement strong user-authentication measures such as multifactor authentication to make it more difficult for attackers like Volt Typhoon to compromise systems and devices. More broadly, the comprehensive NIST Cybersecurity Framework can help these organizations develop stronger cybersecurity postures to defend against Volt Typhoon and other attackers.

Individuals, too, can take steps to protect themselves and their employers by ensuring their devices are properly updated, enabling multifactor authentication, never reusing passwords, and otherwise remaining vigilant to suspicious activity on their accounts, devices and networks.

For cybersecurity practitioners and society generally, threat actors like Volt Typhoon represent an enormous geopolitical cybersecurity threat. They are a reminder for everyone to monitor what’s going on in the world and consider how current events can affect the confidentiality, integrity and availability of all things digital.

Richard Forno, Principal Lecturer in Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. 

Saturday, March 2, 2024

We’ve been here before: AI promised humanlike machines – in 1958

Frank Rosenblatt with the Mark I Perceptron, the first artificial neural network computer, unveiled in 1958. National Museum of the U.S. Navy/Flickr
Danielle Williams, Arts & Sciences at Washington University in St. Louis

A roomsize computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a brief news story buried deep in The New York Times. The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.

The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past – and the reasons for them. While optimism drives progress, it’s worth paying attention to the history.

The Perceptron, invented by Frank Rosenblatt, arguably laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components together. Modern day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.

Much like modern-day machine learning systems, if the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction the next time around. Familiar modern AI systems work in much the same way. Using a prediction-based approach, large language models, or LLMs, are able to produce impressive long-form text-based responses and associate images with text to produce new images based on prompts. These systems get better as they interact more with users.
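
To make the idea concrete, here is a minimal sketch, in Python, of the classic perceptron update rule: predict, compare with the true label, and nudge the weights only when the answer is wrong. The data and function names are invented for illustration, and this is a software toy rather than Rosenblatt’s original analog hardware.

```python
import numpy as np

def train_perceptron(features, labels, epochs=10, learning_rate=1.0):
    """Classic perceptron learning rule: adjust the weights whenever
    the current prediction disagrees with the true label."""
    weights = np.zeros(features.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(features, labels):   # target is +1 or -1
            prediction = 1 if np.dot(weights, x) + bias > 0 else -1
            if prediction != target:
                # Wrong answer: nudge the "connections" toward the target.
                weights += learning_rate * target * x
                bias += learning_rate * target
    return weights, bias

# Toy example: two linearly separable clusters of 2D points.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=[2, 2], scale=0.5, size=(20, 2))
class_b = rng.normal(loc=[-2, -2], scale=0.5, size=(20, 2))
X = np.vstack([class_a, class_b])
y = np.array([1] * 20 + [-1] * 20)

w, b = train_perceptron(X, y)
print("learned weights:", w, "bias:", b)
```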

A timeline of the history of AI starting in the 1940s. Danielle J. Williams, CC BY-ND

AI boom and bust

In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like Marvin Minsky claimed that the world would “have a machine with the general intelligence of an average human being” by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found.

It quickly became apparent that the AI systems knew nothing about their subject matter. Without the appropriate background and contextual knowledge, it’s nearly impossible to accurately resolve ambiguities present in everyday language – a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, hit in 1974 following the perceived failure of the Perceptron.

However, by 1980, AI was back in business, and the first official AI boom was in full swing. There were new expert systems, AIs designed to solve problems in specific areas of knowledge, that could identify objects and diagnose diseases from observable data. There were programs that could make complex inferences from simple stories, the first driverless car was ready to hit the road, and robots that could read and play music were playing for live audiences.

But it wasn’t long before the same problems stifled excitement once again. In 1987, the second AI winter hit. Expert systems were failing because they couldn’t handle novel information.

The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn’t lead to an official boom, AI underwent substantial changes. Researchers began tackling the knowledge acquisition problem with data-driven machine learning approaches that changed how AI systems learned.

This time also marked a return to the neural-network-style perceptron, but this version was far more complex, dynamic and, most importantly, digital. The return to the neural network, along with the invention of the web browser and an increase in computing power, made it easier to collect images, mine for data and distribute datasets for machine learning tasks.

Familiar refrains

Fast forward to today and confidence in AI progress has begun once again to echo promises made nearly 60 years ago. The term “artificial general intelligence” is used to describe the activities of LLMs such as those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious.

Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, so do some contemporary AI theorists about today’s artificial neural networks. In 2023, Microsoft published a paper saying that “GPT-4’s performance is strikingly close to human-level performance.”

Executives at big tech companies, including Meta, Google and OpenAI, have set their sights on developing human-level AI. AP Photo/Eric Risberg

But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest.

For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to idioms, metaphors, rhetorical questions and sarcasm – unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context.

Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently say it’s a snowplow 97% of the time.

Lessons to heed

In fact, it turns out that AI is quite easy to fool in ways that humans would immediately identify. I think it’s a consideration worth taking seriously in light of how things have gone in the past.

The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.

Danielle Williams, Postdoctoral Fellow in Philosophy of Science, Arts & Sciences at Washington University in St. Louis

This article is republished from The Conversation under a Creative Commons license. 

Tuesday, February 27, 2024

Why federal efforts to protect schools from cybersecurity threats fall short

The cost of safeguarding America’s schools from cybercriminals could run as high as $5 billion. boonchai wedmakawand via Getty Images
Nir Kshetri, University of North Carolina – Greensboro

In August 2023, the White House announced a plan to bolster cybersecurity in K-12 schools – and with good reason. Between 2018 and mid-September 2023, there were 386 recorded cyberattacks in the U.S. education sector, costing those schools $35.1 billion. K-12 schools were the primary target.

The new White House initiative includes a collaboration with federal agencies that have cybersecurity expertise, such as the Cybersecurity and Infrastructure Security Agency, the Federal Communications Commission and the FBI. Technology firms like Amazon, Google, Cloudflare, PowerSchool and D2L have pledged to support the initiative with training and resources.

While the steps taken by the White House are positive, as someone who teaches and conducts research about cybersecurity, I don’t believe the proposed measures are enough to protect schools from cyberthreats. Here are four reasons why:

1. Schools face more cyberthreats than other sectors

Cyberattacks on K-12 schools increased more than eightfold in 2022. Educational institutions draw the interest of cybercriminals because their cybersecurity tends to be weak, which gives attackers an opening into networks containing highly sensitive information.

Criminals can exploit students’ information to apply for fraudulent government benefits and open unauthorized bank accounts and credit cards. In testimony to the House Ways and Means Subcommittee on Social Security, a Federal Trade Commission official noted that children’s Social Security numbers are uniquely valuable because they have no credit history and can be paired with any name and date of birth. Over 10% of children enrolled in an identity protection service were discovered to have loans.

Cybercriminals can also use such information to launch ransomware attacks against schools. Ransomware attacks involve locking up a computer or its files and demanding payment for their release. The ransomware victimization rate in the education sector surpasses that of all other surveyed industries, including health care, technology, financial services and manufacturing.

Schools are especially vulnerable to cyberthreats because more and more schools are lending electronic devices to students. Criminals have been found to hide malware within online textbooks and essays to dupe students into downloading it. Should students or teachers inadvertently download malware onto school-owned devices, criminals can launch an attack on the entire school network.

When faced with such an attack, schools can be desperate to comply with criminals’ demands to ensure students’ access to learning.

2. Schools lack cybersecurity personnel

K-12 schools’ poor cybersecurity performance can be attributed, in part, to lack of staff. About two-thirds of school districts lack a full-time cybersecurity position. Those with cybersecurity staff often don’t have the budget for a chief information security officer to oversee and manage the district’s strategy. Often, the IT director takes on this role, but they have a broader responsibility for IT operations without a specific emphasis on security.

3. Schools lack cybersecurity skills

The lack of cybersecurity skills among existing staff hinders the development of strong cybersecurity programs.

Only 10% of educators say that they have a deep understanding of cybersecurity. The majority of students say that they have minimal or no knowledge about cybersecurity. Cybersecurity awareness tends to be even lower in higher-poverty districts, where students have less access to cybersecurity education.

The Cybersecurity and Infrastructure Security Agency plans to provide cybersecurity training to an additional 300 K-12 schools, school districts and other organizations involved in K-12 education in the forthcoming school year. With 130,930 K-12 public schools and 13,187 public school districts in the U.S., CISA’s plan serves only a tiny fraction of them.

4. Inadequate funding

The FCC has proposed a pilot program that would allocate $200 million over three years to boost cyberdefenses. That works out to an annual budget of $66.6 million, which falls far short of covering the full cost of cybersecurity, given that adequately securing the nation’s K-12 schools is estimated to cost $5 billion.

The costs encompass hardware and software procurement, consulting, testing, and hiring data protection experts to combat cyberattacks. Frequent training is also needed to respond to evolving threats. As technology advances, cybercriminals adapt their methods to exploit vulnerabilities in digital systems. Teachers must be ready to address such risks.

Costs are sizable

How much should schools and districts be spending on cybersecurity? Other sectors can serve as a model to guide K-12 schools.

One way to determine cybersecurity funding is by the number of employees. In the financial services industry, for example, these costs range from $1,300 to $3,000 per full-time employee. There are over 4 million teachers in the United States. Setting cybersecurity spending at $1,300 per teacher – the low end of what financial firms spend – would require K-12 schools to spend a total of $5 billion.

An alternate approach is to determine cybersecurity funding relative to IT spending. On average, U.S. enterprises are estimated to spend 10% of their IT budgets on cybersecurity. Since K-12 schools were estimated to spend more than $50 billion on IT in the 2020-21 fiscal year, allocating 10% to cybersecurity would also require them to spend $5 billion.

Another approach is to allocate cybersecurity spending as a proportion of the total budget. In 2019, cybersecurity spending represented 0.3% of the federal budget. Federal, state and local governments collectively allocate $810 billion for K-12 education. If schools set cybersecurity spending at 0.3%, following the example of federal agencies, that would require an annual budget of $2.4 billion.
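
Those three benchmarks reduce to simple arithmetic. Here is a minimal sketch in Python using the round figures cited above; the inputs are this article’s estimates, not official budget lines.

```python
# Back-of-the-envelope estimates for K-12 cybersecurity spending,
# using the round figures cited above. All amounts are in US dollars.

TEACHERS = 4_000_000        # "over 4 million teachers"
PER_EMPLOYEE_LOW = 1_300    # low end of financial-sector spending per employee
K12_IT_BUDGET = 50e9        # "more than $50 billion" on IT in 2020-21
IT_SHARE = 0.10             # enterprises spend ~10% of IT budgets on security
TOTAL_K12_BUDGET = 810e9    # combined federal, state and local K-12 funding
FEDERAL_SHARE = 0.003       # cybersecurity was 0.3% of the 2019 federal budget

per_employee_estimate = TEACHERS * PER_EMPLOYEE_LOW
it_share_estimate = K12_IT_BUDGET * IT_SHARE
budget_share_estimate = TOTAL_K12_BUDGET * FEDERAL_SHARE

print(f"Per-teacher benchmark: ${per_employee_estimate / 1e9:.1f} billion")
print(f"10% of IT spending:    ${it_share_estimate / 1e9:.1f} billion")
print(f"0.3% of total budget:  ${budget_share_estimate / 1e9:.1f} billion")
```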

By contrast, a fifth of schools dedicate less than 1% of their IT budgets – not their entire budgets – to cybersecurity. In 12% of school districts, there is no allocation for cybersecurity at all.

Nir Kshetri, Professor of Management, University of North Carolina – Greensboro

This article is republished from The Conversation under a Creative Commons license. 

Saturday, February 3, 2024

Animal CSI: Forensics comes for the wildlife trade


Scientists are using the latest in DNA fingerprinting to combat the multibillion-dollar business of trafficking plants and animals

Campbell’s death was as gruesome as the killers’ previous nine known crimes. Found mutilated in a pool of blood at his home in the district of Albany, South Africa, in June 2016, Campbell had been drugged but was likely in pain before he died from his injuries.

Campbell was a white rhinoceros living on a private reserve, and his killing would be the last hurrah of the now notorious Ndlovu Gang. The three poachers were arrested days later at the Makana Resort in Grahamstown, South Africa, caught red-handed with a bow saw, a tranquilizer dart gun and a freshly removed rhino horn. A variety of evidence, including cellphone records and ballistics analysis of the dart gun, would link them to the crime. But a key element was Campbell’s DNA, found in the horn and on the still-bloody saw.

Among the scientific techniques used to combat poaching and wildlife trafficking, DNA is king, says Cindy Harper, a veterinary geneticist at the University of Pretoria. Its application in animal investigations is small-scale but growing in a field with a huge volume of crime: The value of the illegal wildlife trade is as much as $20 billion per year, Interpol estimates.

“It’s not just a few people swapping animals around,” says Greta Frankham, a wildlife forensic scientist at the Australian Center for Wildlife Genomics in Sydney. “It’s got links to organized crime; it is an enormous amount of turnover on the black market.”

The problem is global. In the United States, the crime might be the illegal hunting of deer or black bears, the importing of protected-animal parts for food or medicinal use, the harvesting of protected cacti, or the trafficking of ivory trinkets. In Africa or Asia, it might be the poaching of pangolins, the globe’s most trafficked mammal for both its meat and its scales, which are used in traditional medicines and magic practices. In Australia, it might be the collection or export of the continent’s unique wildlife for the pet trade.

Techniques used in wildlife forensics are often direct descendants of tools from human crime investigations, and in recent years scientists have adapted and tailored them for use in animals. Harper and colleagues, for example, learned to extract DNA from rhinoceros horns, a task once thought impossible. And by building DNA databases — akin to the FBI’s CODIS database used for human crimes — forensic geneticists can identify a species and more: They might pinpoint a specimen’s geographic origin, family group, or even, in some cases, link a specific animal or animal part to a crime scene.

Adapting this science to animals has contributed to major crime busts, such as the 2021 arrests in an international poaching and wildlife trafficking ring. And scientists are further refining their techniques in the hopes of identifying more challenging evidence samples, such as hides that have been tanned or otherwise degraded.

“Wildlife trafficking investigations are difficult,” says Robert Hammer, a Seattle-based special agent-in-charge with Homeland Security Investigations, the Department of Homeland Security’s arm for investigating diverse crimes, including those involving smuggling, drugs and gang activity. He and his colleagues, he says, rely on DNA and other forensic evidence “to tell the stories of the animals that have been taken.”

First, identify

Wildlife forensics generally starts with a sample sent to a specialized lab by investigators like Hammer. Whereas people-crime investigators generally want to know “Who is it?” wildlife specialists are more often asked “What is this?” — as in, “What species?” That question could apply to anything from shark fins to wood to bear bile, a liver secretion used in traditional medicines.

“We get asked questions about everything from a live animal to a part or a product,” says Barry Baker, deputy laboratory director at the US National Fish and Wildlife Forensics Laboratory in Ashland, Oregon.

Investigators might also ask whether an animal photographed at an airport is a species protected by the Convention on International Trade in Endangered Species of Wild Fauna and Flora, or CITES, in which case import or export is illegal without a permit. They might want to know whether meat brought into the US is from a protected species, such as a nonhuman primate. Or they might want to know if a carved knickknack is real ivory or fake, a difference special lighting can reveal.

While some identifications can be made visually, DNA or other chemical analyses may be required, especially when only part of the creature is available. To identify species, experts turn to the DNA in mitochondria, the cellular energy factories that populate nearly every cell, usually in multiple copies. DNA sequences therein are similar in all animals of the same species, but different between species. By reading those genes and comparing them to sequences in a database such as the Barcode of Life, forensic geneticists can identify a species.
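
As a rough illustration of how such a comparison works, here is a minimal Python sketch that matches a query sequence against reference barcodes by percent identity. The species, sequences and threshold are invented stand-ins; real barcoding relies on curated references such as the Barcode of Life database and far longer sequences.

```python
# Toy species identification by mitochondrial DNA barcoding: compare a
# query sequence against reference barcodes and report the closest match.
# The sequences below are short, made-up stand-ins; real barcodes (such as
# the COI gene used by the Barcode of Life project) run ~650 base pairs.

reference_barcodes = {
    "Ceratotherium simum (white rhinoceros)": "ATGCGTACGTTAGCAT",
    "Diceros bicornis (black rhinoceros)":    "ATGCGTACCTTAGGAT",
    "Bos taurus (domestic cattle)":           "ATGAGTTCGTTAGCTT",
}

def identity(seq_a, seq_b):
    """Fraction of matching positions (assumes pre-aligned, equal-length sequences)."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

def identify_species(query, references, threshold=0.95):
    best_species, best_score = max(
        ((species, identity(query, seq)) for species, seq in references.items()),
        key=lambda item: item[1],
    )
    verdict = best_species if best_score >= threshold else "no confident match"
    return verdict, best_score

query = "ATGCGTACGTTAGCAT"  # e.g., sequenced from a seized horn shaving
print(identify_species(query, reference_barcodes))
```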

To go further to try to link a specimen to a specific, individual animal, forensic geneticists use the same technique that’s used in human DNA forensics, in this case relying on the majority of DNA contained in the cell’s nucleus. Every genome contains repetitive sequences called microsatellites that vary in length from individual to individual. Measuring several microsatellites creates a DNA fingerprint that is rare, if not unique. In addition, some more advanced techniques use single-letter variations in DNA sequences for fingerprinting.

Comparing the DNA of two samples allows scientists to make a potential match, but it isn’t a clincher: That requires a database of DNA fingerprints from other members of the species to calculate how unlikely it is — say, a one-in-a-million chance — that the two samples came from different individuals. Depending on the species’ genetic diversity and its geographic distribution, a valid database could have as few as 50 individuals or it could require many more, says Ashley Spicer, a wildlife forensic scientist with the California Department of Fish and Wildlife in Sacramento. Such databases don’t exist for all animals and, indeed, obtaining DNA samples from even as few as 50 animals could be a challenge for rare or protected species, Spicer notes.
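
For a sense of where that “one in a million” figure comes from, here is a minimal Python sketch of a random-match-probability calculation, assuming independent loci (Hardy-Weinberg equilibrium) and a reference database of allele frequencies. The loci, genotypes and frequencies are invented for illustration; real profiles use many more loci, which drives the probability far lower.

```python
# Sketch of a random-match-probability estimate for a microsatellite
# DNA profile. Assumes independent loci and allele frequencies taken
# from a population reference database. All values below are invented.

allele_frequencies = {
    "locus_1": {12: 0.20, 14: 0.35, 16: 0.45},
    "locus_2": {8: 0.10, 9: 0.50, 11: 0.40},
    "locus_3": {21: 0.25, 23: 0.30, 25: 0.45},
}

# Evidence profile: the pair of alleles observed at each locus.
evidence_profile = {"locus_1": (12, 16), "locus_2": (9, 9), "locus_3": (21, 25)}

def random_match_probability(profile, frequencies):
    """Probability that an unrelated individual shares this profile."""
    probability = 1.0
    for locus, (a, b) in profile.items():
        p, q = frequencies[locus][a], frequencies[locus][b]
        if a == b:
            probability *= p * p          # homozygous genotype
        else:
            probability *= 2 * p * q      # heterozygous genotype
    return probability

rmp = random_match_probability(evidence_profile, allele_frequencies)
print(f"Random match probability: about 1 in {1 / rmp:,.0f}")
```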

Investigators use these techniques in diverse ways: An animal may be the victim of a crime, the perpetrator or a witness. And if, say, dogs are used to hunt protected animals, investigators could find themselves with animal evidence related to both victim and suspect.

For witnesses, consider the case of a white cat named Snowball. When a woman disappeared in Richmond, on Canada’s Prince Edward Island, in 1994, a bloodstained leather jacket with 27 white cat hairs in the lining was found near her home. Her body was found in a shallow grave in 1995, and the prime suspect was her estranged common-law husband, who lived with his parents and Snowball, their pet. DNA from the root of one of the jacket hairs matched Snowball’s blood. Though the feline never took the stand, the cat’s evidence spoke volumes, helping to clinch a murder conviction in 1996.


A database for rhinos

The same kind of specific linking of individual animal to physical evidence was also a key element in the case of Campbell the white rhino. Rhino horn is prized: It’s used in traditional Chinese medicine and modern variants of the practice to treat conditions from colds to hangovers to cancer, and is also made into ornaments such as cups and beads. At the time of Campbell’s death, his horn, weighing north of 10 kilograms, was probably worth more than $600,000 — more than its weight in gold — on the black market.

The DNA forensics that helped nab the Ndlovu Gang started with experiments in the early 2000s, when rhino poaching was on the rise. Scientists once thought rhino horns were nothing but densely packed hair, lacking cells that would include DNA, but a 2006 study showed that cells, too, are present. A few years later, Harper’s group reported that even though these cells were dead, they contained viable DNA, and the researchers figured out how to access it by drilling into the horn’s core.

In 2010, a crime investigator from South Africa’s Kruger National Park dropped by Harper’s lab. He was so excited by the potential of her discovery to combat poaching that he ripped a poster describing her results off the wall, rolled it up and took it away with him. Soon after, Harper launched the Rhinoceros DNA Index System, or RhODIS. (The name is a play on the FBI’s CODIS database, for Combined DNA Index System.)

Today, thanks to 2012 legislation from the South African government, anyone in that nation who handles a rhino or its horn — for example, when dehorning animals for the rhinos’ own protection — must send Harper’s team a sample. RhODIS now contains about 100,000 DNA fingerprints, based on 23 microsatellites, from African rhinoceroses both black and white, alive and long dead, including most of the rhinos in South Africa and Namibia, as well as some from other nations.

RhODIS has assisted with numerous investigations, says Rod Potter, a private consultant and wildlife crime investigator who has worked with the South African Police Service for more than four decades. In one case, he recalls, investigators found a suspect with a horn in his possession and used RhODIS to identify the animal before the owner even knew the rhino was dead.

In Campbell’s case, in 2019 the three poachers were convicted, to cheers from observers in the courtroom, of charges related to 10 incidents. Each gang member was sentenced to 25 years in prison.

Today, as rhino poaching has rebounded after a pandemic-induced lull, the RhODIS database remains important. And even when RhODIS can’t link evidence to a specific animal, Potter says, the genetics are often enough to point investigators to the creature’s approximate geographic origin, because genetic markers vary by location and population. And that can help illuminate illegal trade routes.

Elephants also benefit

DNA can make a big impact on investigations into elephant poaching, too. Researchers at the University of Washington in Seattle, for example, measured DNA microsatellites from roving African elephants as well as seized ivory, then built a database and a geographical map of where different genetic markers occur among elephants. The map helps to determine the geographic source of poached, trafficked tusks seized by law enforcement officials.

Elephants travel in matriarchal herds, and DNA markers also run in families, allowing the researchers to determine the relatedness of different tusks, be they from parents, offspring, siblings or half-siblings. When they find tusks from the same elephant or clan in different shipments with a common port, it suggests that the shipments were sent from the same criminal network — which is useful information for law enforcement officials.

This kind of information came in handy during a recent international investigation, called Operation Kuluna, led by Hammer and colleagues at Homeland Security Investigations. It started with a sting: Undercover US investigators purchased African ivory that was advertised online. In 2020, the team spent $14,500 on 49 pounds of elephant ivory that was cut up, painted black, mixed with ebony and shipped to the United States with the label “wood.” The following year, the investigators purchased about five pounds of rhino horn for $18,000. The undercover buyers then expressed interest in lots more inventory, including additional ivory, rhino horns and pangolin scales.

The promise of such a huge score lured two sellers from the Democratic Republic of the Congo (DRC) to come to the United States, expecting to seal the $3.5 million deal. Instead, they were arrested near Seattle and eventually sentenced for their crimes. But the pair were not working alone: Operations like these are complex, says Hammer, “and behind complex conspiracies come money, organizers.” And so the investigators took advantage of elephant genetic and clan data, which helped link the tusks to other seizures. It was like playing “Six Degrees of Kevin Bacon,” says Hammer.

Shortly after the US arrests, Hammer’s counterparts in Africa raided warehouses in the DRC to seize more than 2,000 pounds of ivory and 75 pounds of pangolin scales, worth more than $1 million.

Despite these successes, wildlife forensics remains a small field: The Society for Wildlife Forensic Science has fewer than 200 members in more than 20 countries. And while DNA analysis is powerful, the ability to identify species or individuals is only as good as the genetic databases researchers can compare their samples to. In addition, many samples contain degraded DNA that simply can’t be analyzed — at least, not yet.

Today, in fact, a substantial portion of wildlife trade crimes may go unprosecuted because researchers don’t know what they’re looking at. The situation leaves scientists stymied by that very basic question: “What is this?”

For example, forensic scientists can be flummoxed by animal parts that have been heavily processed. Cooked meat is generally traceable; leather is not. “We have literally never been able to get a DNA sequence out of a tanned product,” says Harper, who wrote about the forensics of poaching in the 2023 Annual Review of Animal Biosciences. In time, that may change: Several researchers are working to improve identification of degraded samples. They might work out ways to do so based on the proteins therein, says Spicer, since these are more resistant than DNA is to destruction by heat or chemistry.

Success, stresses Spicer, will require the cooperation of wildlife forensic scientists around the world. “Anywhere that somebody can get a profit or exploit an animal, they’re going to do it — it happens in every single country,” she says. “And so it’s really essential that we all work together.”

Saturday, January 27, 2024

We urgently need data for equitable personalized medicine

OPINION: A massive bias in medical studies toward men of European origin means that genetic variants in understudied populations don’t get the focus they deserve

Warfarin is a powerful blood thinner and a leading drug for cardiovascular disease worldwide. But in South Africa, it is among the top four drug varieties leading to hospitalization from adverse drug reactions. It’s reasonable to suppose that the drug has similar problematic effects farther across sub-Saharan Africa, though the national data needed to show it are lacking.

The fact that warfarin is riskier in some populations than others isn’t a surprise. Different geographic regions tend to host people with slightly different genetic makeups, and sometimes those genetic differences lead to radically different reactions to drugs. For certain people, a higher dosage of warfarin is fine; for others, it’s dangerous. Researchers have known this for decades.

The problem is that the majority of medical research, including genetic research, is still done mainly on one subset of the world’s population: men of Northern European origin. This means that negative drug-gene interactions in other, less well-studied populations can fly beneath the radar. In the case of warfarin, one study concluded that using someone’s genetic information to help guide their drug dosing would benefit 18 percent to 24 percent of people categorized as white, but have no benefit for people identified as Black, Chinese or Japanese.

While that study is a decade old, the general point still holds true: A bias in our current understanding of the genetics of different populations means that some people would be helped far more than others by genetically informed personalized medicine.

As a bioinformatician, I am now focusing my attention on gathering the statistics to show just how biased medical research data are. There are problems across the board, ranging from which research questions get asked in the first place, to who participates in clinical trials, to who gets their genomes sequenced. The world is moving toward “precision medicine,” where any individual can have their DNA analyzed and that information can be used to help prescribe the right drugs in the right dosages. But this won’t work if a person’s genetic variants have never been identified or studied in the first place.

It’s astonishing how powerful our genetics can be in mediating medicines. Take the gene CYP2D6, which is known to play a vital role in how fast humans metabolize 25 percent of all the pharmaceuticals on the market. If you have a genetic variant of CYP2D6 that makes you metabolize drugs more quickly, or less quickly, it can have a huge impact on how well those drugs work and the dangers you face from taking them. Codeine was banned from all of Ethiopia in 2015, for example, because a high proportion of people in the country (perhaps 30 percent) have a genetic variant of CYP2D6 that makes them quickly metabolize that drug into morphine, making it more likely to cause respiratory distress and even death.

Researchers have identified over a hundred different CYP2D6 variants and there are likely many, many more out there that we don’t yet know the impacts of — especially in understudied populations.

Back in 2016, researchers published an important article looking at more than 2,500 genome-wide association studies done up to that time. These are studies that scan the genomes of thousands of people to find variants associated with disease traits. What the researchers found was disturbing: While there had been some improvement in diversity since 2009, still 81 percent of the nearly 35 million samples in those studies came from people of European descent.

You might expect that, since everyone knows this is a problem, it would have gotten much better over recent years. It hasn’t. In 2021, another study of genome-wide association studies showed that the European-origin proportion had increased, not decreased, from 81 percent to 86 percent.

It’s not just genome-wide studies that have this issue. Direct-to-consumer genetic sequencing services like 23andMe are also skewed: One analysis suggests that 95 percent of the participants have predicted European ancestry, compared to just 2 percent African. And in PharmGKB, one of the world’s leading databases of drug-gene interactions, 64 percent of the data come from people of European ancestry, though this group makes up only 16 percent of the global population. Indigenous Americans account for the smallest amount of the data (just 0.1 percent). But when taking global population into account, it is Central and South Asian people who are least well represented, with only 2 percent of the data but 26 percent of the global population.
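
One way to read those figures is as a representation ratio: a group’s share of the data divided by its share of the global population. Below is a minimal sketch using only the numbers cited above; population shares not given in the text are left out rather than guessed.

```python
# Representation ratio: share of PharmGKB data divided by share of
# global population, using the figures cited in this article.
# A ratio of 1.0 would mean a group is represented in proportion to
# its population; values below 1.0 indicate under-representation.

groups = {
    "European ancestry":   {"data_share": 0.64, "population_share": 0.16},
    "Central/South Asian": {"data_share": 0.02, "population_share": 0.26},
    "African descent":     {"data_share": 0.04, "population_share": None},
    "Indigenous American": {"data_share": 0.001, "population_share": None},
}

for name, shares in groups.items():
    if shares["population_share"] is None:
        print(f"{name:20s} data share {shares['data_share']:.1%} "
              f"(population share not cited above)")
    else:
        ratio = shares["data_share"] / shares["population_share"]
        print(f"{name:20s} data share {shares['data_share']:.1%}, "
              f"representation ratio {ratio:.1f}x")
```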

People of African descent have the greatest genetic diversity on the planet (because humanity originated in Africa), and so arguably they deserve the greatest amount of study. But this is hardly the situation. This population makes up just 4 percent of the PharmGKB dataset, for example.

Geographic ancestry isn’t the only factor that’s biased. Women make up only 38 percent of participants in studies of drug effectiveness and pharmacokinetics, for example. Because of gender bias all along the line, women experience adverse drug reactions nearly twice as often as men. And this doesn’t even scratch the surface of people with genetic conditions — like my son who has Down syndrome — or other disabilities.

There are some good efforts working to correct these problems. On 18 October 2023, researchers announced plans to create one of the largest-yet databases of genomes exclusively from people with African ancestry. The project aims to recruit at least 500,000 volunteers (for comparison, tens of millions of people globally have had their genomes sequenced to date). This is a great effort; more should follow suit.

Everyone stands to gain from more diverse work. Right now, one clue that researchers use to help determine whether a genetic mutation might be linked to disease (or not) is whether that mutation is rare (or not); if a variant is extremely uncommon, this is one hint that it might be pathogenic (since most people don’t have a given disease). But this could be sending researchers chasing after red herrings. One study published in March 2023, for example, performed whole-genome sequencing on 180 people from 12 indigenous African populations, and found that of 154 mutations labeled “pathogenic” or “likely pathogenic” in a well-known database, 44 were at least five times more frequent in at least one of these African populations. This suggests that those mutations might be benign after all.

The International Covenant on Economic, Social and Cultural Rights, adopted by the United Nations General Assembly on 16 December 1966, recognizes everyone’s rights to enjoy the benefits of scientific progress. But that is not happening yet. We need to ramp up representation in genetic and medical studies to ensure fair treatment for all.

Tuesday, January 16, 2024

What will it take to recycle millions of worn-out EV batteries?

In Nevada and other US states, entrepreneurs are anticipating the coming boom in retired lithium-ion batteries from electric cars and hoping to create a market for recycled minerals

Thirty miles east of Reno, Nevada, past dusty hills patched with muted blue sage and the occasional injury-lawyer billboard, a large concrete structure rises prominently in the desert landscape. When fully constructed, it will be a pilot for a business that entrepreneurs envision as a major facet of America’s future green economy: lithium-ion battery recycling.

Construction manager Chuck Leber points out bays where trucks will drop off batteries, and deep drains in rooms to catch leaking chemicals. He shows me a two-foot concrete slab under the building — a hefty foundation so that workers can move equipment and adapt the plant while refining the recycling process. Later this year, the first batteries will pass through the facility; the goal is to ramp up to handle 20,000 metric tons of batteries a year.

The 60,000-square-foot plant owned by the American Battery Technology Company is an optimistic endeavor to address the inconvenient environmental downside of electric vehicles — their resource-demanding battery packs. It is also a test of whether business leaders can live up to their promises to help build a circular economy: one in which materials are reused indefinitely, minimizing the need to continually pry more minerals from the earth.

Since 2019, electric vehicles — EVs — have more than tripled their share of the auto market, and 6.6 million were sold globally in 2021. Facing pressure and sometimes outright regulation to reduce their climate footprint, many automakers have pledged to stop sales of new combustion-engine vehicles by 2040. “In five years, we are aiming for having tens of millions of electric vehicles on the roads,” says Alexandre Milovanoff of the sustainability consulting company Anthesis Group, who has studied how an EV transition would affect America’s electrical grid. “We’re talking about a market that is exploding.”

To feed the rising EV battery demand, the US government and companies are investing in domestic mining for the needed minerals — including nickel, manganese, cobalt and lithium (the price of which more than quadrupled in 2021). But they are also looking for ways to reduce dependence on newly mined materials through recycling. In March 2022, President Joe Biden invoked the Defense Production Act to bolster supplies of the in-demand minerals, directing domestic investments both in mining and in other forms of recovery.

Researchers say that figuring out recycling could help to avoid the environmental risks of more mining and a buildup of hazardous battery waste — but reprocessing these batteries and refining the metals they contain for reuse is difficult and costly, and many remain skeptical of how truly circular that supply chain can ever be. “An electric vehicle battery is a very complex piece of technology with a lot of different components in it — so a recycling facility is going to be very complicated,” says Michael McKibben, a geologist at the University of California, Riverside. “In the long run, that’s going to be important, but in the short run, it’s got a ways to go.”

Sourcing specific minerals

To power a car, electrons in the battery move from the negative electrode, the anode, to the positive electrode, the cathode. Typically, the anode is made of copper and graphite, while the cathode consists of a class of compounds called lithium metal oxides — ones that contain lithium plus other metals such as cobalt, manganese and nickel.

All of these metals must be sourced — and recycling alone cannot yet meet market needs. Though the US has numerous copper mines (and obtains a sizable chunk of copper from scrap recycling), nearly all of the other metals in lithium-ion batteries come from mines in other countries. More than 80 percent of global lithium comes from Chile, Australia and China, while more than 60 percent of cobalt comes from the Democratic Republic of Congo.

This overseas reliance can come with costs. Much of the lithium mined today, for example, comes from the fragile Atacama Desert in Chile, where the metal is recovered by evaporating salty brine in massive ponds. It’s cost-effective, but researchers and local communities have raised concerns over toxic wastes and the depletion and contamination of water supplies; by one estimate, it takes 500,000 gallons of water — largely lost to evaporation — to concentrate a single metric ton of lithium. Sourcing battery metals also has been connected with human rights abuses in some locations, such as cobalt mining in the Democratic Republic of Congo, where companies have been accused of using child labor, paying workers poorly and failing to provide basic safety equipment.

There’s also a greenhouse-gas price to pay for the long-distance transport of materials: Before anyone even gets in the driver’s seat of a brand-new electric vehicle, some EV battery materials have already traveled tens of thousands of miles. (Still, electric vehicles — with few exceptions — have a smaller carbon footprint than gas-powered cars, and electrifying transportation is key to slashing carbon emissions to stave off disastrous levels of climate change.)

For now, mining remains a need, and researchers think it’s possible to reduce its impacts through domestic operations and new technologies. But they say it is also crucial to ramp up the technology and business models for recycling. After thousands of charges and discharges, cells of lithium-ion batteries dry up and cracks form in the cathode materials, until the battery can neither hold nor deliver enough charge. Millions of EV batteries will soon be reaching this point, and if they’re deposited at the dump, they can leach toxic chemicals and even catch fire. A few US companies collect batteries for recycling, but this capacity lags behind the volume of spent lithium-ion batteries from cars, phones, computers and other electronics. In 2019, US recycling companies diverted from landfills about 15 percent of all retired lithium-ion batteries.

The challenges of recycling

Profitability is a major barrier. Though lithium-ion batteries contain valuable metals, they are challenging to take apart and the minerals are hard to extract from the tight layers of inorganic and organic compounds. By one estimate, the cost of recycled lithium is five times that of virgin lithium from brine-mining. Compare that with lead-acid batteries in combustion cars, which are almost entirely diverted from landfills and recycled. “It’s easy as pie to recycle a lead-acid battery in comparison to a lithium-ion battery,” says geologist Jens Gutzmer, director of the Helmholtz Institute Freiberg for Resource Technology in Germany and coauthor of an article about building a circular metals economy in the Annual Review of Materials Research.

Another problem is that today’s main lithium-ion battery recycling processes are also not particularly efficient. A process used by many recyclers, pyrometallurgy, involves melting down the batteries and burning off plastic separators to extract the coveted metals. Pyrometallurgy is energy-intensive, emits toxic gases and can’t recover some valuable minerals, including lithium, at all.

With growing EV sales, a massive wave of dead electric car batteries will soon exacerbate recycling problems. By 2028, researchers predict that the world will have more than a million metric tons of them to deal with. “I like to compare it to the plastic industry — we have a lot of plastic waste, and people are not really dealing with that — and I’m just worried that this will be happening also with batteries,” says Laura Lander, a materials scientist at King’s College London. And yet if it could be made profitable, scaling up EV battery recycling could, by 2040, reduce the global need for newly mined lithium by 25 percent, and for cobalt and nickel by 35 percent, according to one report prepared by the Institute for Sustainable Futures at the University of Technology Sydney in Australia.

Efforts to improve battery recycling are underway at the Department of Energy’s ReCell Center, a collaboration with national labs and universities launched in 2019. There, researchers are working to scale up what’s called “direct recycling.” This method aims to recapture the cathode material — a carefully manufactured powder — without melting or dissolving the whole battery and destroying the powder in the process. “They put a lot of time and effort into making these beautiful, spherical particles that are about 10 microns in diameter with the right crystal structure,” says Albert Lipson, a materials scientist at Argonne National Laboratory and the ReCell Center.

Lipson’s research team developed a chemical process to successfully recover cathode powder, which can then be rejuvenated by adding fresh lithium — returning the charging capacity that was lost as the original battery aged. The direct recycling method could make it more profitable to recover battery components while producing fewer greenhouse gas emissions than other recycling processes that use energy-intensive steps to re-manufacture cathode materials. (These involve, at one point, putting the material in a massive furnace.) ReCell’s direct recycling is being done only in laboratory-size batches right now, but Lipson says his team is working with companies to scale up the process.

Battery recycling startups, for their part, are primarily using a technique called hydrometallurgy that dissolves the batteries in acid. Liquid solvents are then used to extract the minerals. Though hydrometallurgy isn’t new, the recycling companies — including American Battery Technology Company and Redwood Materials, both based in northern Nevada and headed by former Tesla engineers — say they are making the process more efficient and recovering more material than in the past.

Ryan Melsert, who heads American Battery Technology, says his time at Tesla’s Gigafactory — a gargantuan facility assembling batteries outside Sparks, Nevada — clued him in to ways to improve the recycling technology. Instead of shredding batteries as old-school hydrometallurgical recycling does, his company will use machines to break down used batteries, and then will separate and sell the lower-value components such as plastic, aluminum and steel. Proprietary chemical reactions will then extract nickel, cobalt, manganese and lithium.

“Instead of just dropping a battery in a furnace or a shredder,” Melsert says, “what our team has done is essentially take many of the same techniques we developed on the manufacturing side and we now operate them in reverse order.” He says the process can recover more than 90 percent of the high-value elements.

Getting the batteries from cars

But to recycle batteries, these startups will need to ensure that the packs make it to their facilities to begin with — a challenge in and of itself because facilities that process junked cars today don’t have protocols for EVs, including how to handle the batteries. Melsert says his company hopes to build on new partnerships with General Motors, Ford and Stellantis (which owns several brands including Dodge, Jeep and Maserati) to ensure that when a car is traded in, the battery will be sent for recycling. And Redwood Materials has announced collaborations with Volkswagen, Toyota, Ford and other automakers on battery collection and recycling.

As I walk with Leber, the construction manager, across American Battery Technology’s future recycling facility, he shows me where the finished goods warehouse will be located, across the building from the truck bays. During the first phase of operation, these finished goods will consist of “black mass,” a crumbly mixture of the valuable metals that will be sold to smelter companies for further refining and resale to battery manufacturers. Eventually, the company plans to add a second stage that will further refine this mix into separate minerals on-site.

American Battery Technology, along with Redwood Materials, Retriev Technologies and Canada-based Li-Cycle — the four main builders of EV battery recycling capacity in the US — have visions on their websites suggesting they are striving to build an infinitely recyclable supply chain. But is truly infinite reuse of battery minerals possible? Experts like Milovanoff and Gutzmer say that’s unlikely due to barriers like labor costs and energy needs. Still, it is technically possible to scale up and recycle more than 90 percent of the lithium, cobalt, nickel and copper in batteries, Lipson says — as long as the economics works out.

Ultimately, the success of battery recycling rests on whether it can be done cheaply enough. Even with improved technology, recyclers may face difficulties making their products cost-competitive with virgin minerals, says Aimee Boulanger, executive director of the Initiative for Responsible Mining Assurance, a coalition that works with companies to improve environmental and labor standards of mining projects. Incentives and regulations may also be needed: In the European Union, regulators have proposed guidelines for sustainable batteries that include their containing a proportion of recycled materials.

Melsert is optimistic. He thinks that since most battery minerals are mined internationally now, the transportation and import costs of virgin minerals will make domestically recycled materials competitive — a calculation supported by some research. In about another two years, he hopes to start building a facility that’s an order of magnitude larger to keep up with growing EV sales. And with demand for minerals outpacing what recycling will, for now, be able to provide, his company also has stakes in mining lithium in central Nevada.

“Some of the largest companies in the world are buying as much recycled battery metals as available,” he says. “The challenge, right now, is really about who can scale up the quickest.”

Saturday, January 13, 2024

Data brokers know everything about you – what FTC case against ad tech giant Kochava reveals

The data collectors see all. DrAfter123/DigitalVision Vectors via Getty Images
Anne Toomey McKenna, University of Richmond

Kochava, the self-proclaimed industry leader in mobile app data analytics, is locked in a legal battle with the Federal Trade Commission in a case that could lead to big changes in the global data marketplace and in Congress’ approach to artificial intelligence and data privacy.

The stakes are high because Kochava’s secretive data acquisition and AI-aided analytics practices are commonplace in the global location data market. In addition to numerous lesser-known data brokers, the mobile data market includes larger players like Foursquare and data market exchanges like Amazon’s AWS Data Exchange. The FTC’s recently unsealed amended complaint against Kochava makes clear that there’s truth to what Kochava advertises: it can provide data for “Any Channel, Any Device, Any Audience,” and buyers can “Measure Everything with Kochava.”

Separately, the FTC is touting a settlement it just reached with data broker Outlogic, in what it calls the “first-ever ban on the use and sale of sensitive location data.” Outlogic has to destroy the location data it has and is barred from collecting or using such information to determine who comes and goes from sensitive locations, like health care centers, homeless and domestic abuse shelters, and religious places.

According to the FTC and proposed class-action lawsuits against Kochava on behalf of adults and children, the company secretly collects, without notice or consent, and otherwise obtains vast amounts of consumer location and personal data. It then analyzes that data using AI, which allows it to predict and influence consumer behavior in an impressively varied and alarmingly invasive number of ways, and serves it up for sale.

Kochava has denied the FTC’s allegations.

The FTC says Kochava sells a “360-degree perspective” on individuals and advertises it can “connect precise geolocation data with email, demographics, devices, households, and channels.” In other words, Kochava takes location data, aggregates it with other data and links it to consumer identities. The data it sells reveals precise information about a person, such as visits to hospitals, “reproductive health clinics, places of worship, homeless and domestic violence shelters, and addiction recovery facilities.” Moreover, by selling such detailed data about people, the FTC says “Kochava is enabling others to identify individuals and exposing them to threats of stigma, stalking, discrimination, job loss, and even physical violence.”

I’m a lawyer and law professor practicing, teaching and researching AI, data privacy and evidence. These complaints underscore for me that U.S. law has not kept pace with regulation of commercially available data or governance of AI.

Most data privacy regulations in the U.S. were conceived before the generative AI era, and there is no overarching federal law that addresses AI-driven data processing. There are congressional efforts to regulate the use of AI in decision-making, such as hiring and sentencing, and to require public transparency around how AI is used, but Congress has yet to pass legislation.

The Federal Trade Commission’s suit against Kochava is set against a backdrop of minimal regulation of data brokers.

What litigation documents reveal

According to the FTC, Kochava secretly collects and then sells its “Kochava Collective” data, which includes precise geolocation data, comprehensive profiles of individual consumers, consumers’ mobile app use details and Kochava’s “audience segments.”

The FTC says Kochava’s audience segments can be based on “behaviors” and sensitive information such as gender identity, political and religious affiliation, race, visits to hospitals and abortion clinics, and people’s medical information, like menstruation and ovulation, and even cancer treatments. By selecting certain audience segments, Kochava customers can identify and target extremely specific groups. For example, this could include people who identify their gender as “other,” or all the pregnant women who are African American and Muslim. The FTC says selected audience segments can be narrowed to a specific geographical area or, conceivably, even down to a specific building.

By “identify,” the FTC means that Kochava customers can obtain the name, home address, email address, economic status and stability, and much more data about the people within the selected groups. This data is purchased by organizations like advertisers, insurers and political campaigns that seek to narrowly classify and target people. The FTC also says it can be purchased by people who want to harm others.
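To make the mechanics concrete, the following is a minimal, entirely hypothetical Python sketch, not Kochava’s actual schema, product or code, of how combining a few audience segments with a geographic filter can shrink a large pool of records down to a handful of identifiable people. Every field and function name here is an illustrative assumption.

from dataclasses import dataclass, field

@dataclass
class Record:
    # Identity fields the FTC says buyers can obtain
    name: str
    home_address: str
    email: str
    # Behavioral and demographic labels, e.g. {"pregnant", "religion:muslim"}
    segments: set = field(default_factory=set)
    lat: float = 0.0
    lon: float = 0.0

def narrow(records, required_segments, lat_range, lon_range):
    """Keep only records carrying every requested segment that also fall
    inside a geographic bounding box."""
    return [
        r for r in records
        if required_segments <= r.segments
        and lat_range[0] <= r.lat <= lat_range[1]
        and lon_range[0] <= r.lon <= lon_range[1]
    ]

Shrink the bounding box far enough, to a neighborhood or a single building, and the list that comes back effectively names specific people.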

How Kochava acquires such sensitive data

The FTC says Kochava acquires consumer data in two ways: through software development kits that Kochava provides to app developers, and directly from other data brokers. The FTC says those Kochava-supplied kits are installed in over 10,000 apps globally. Embedded with Kochava’s code, the kits collect troves of data and send them back to Kochava without consumers being told about, or consenting to, the collection.
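As a rough illustration of the data flow the FTC describes, and only as an illustration, since this is not Kochava’s SDK, endpoint or field names, an analytics module bundled inside an app might do something like the following Python sketch each time it reports back to its collection servers.

import time
import requests  # third-party HTTP library

COLLECTOR_URL = "https://collector.example.com/ingest"  # placeholder endpoint

def report_event(device_id: str, app_id: str, lat: float, lon: float) -> None:
    """Send one location 'ping' back to the data collector's servers."""
    payload = {
        "device_id": device_id,   # persistent advertising identifier
        "app_id": app_id,         # which app the module is embedded in
        "lat": lat,               # precise geolocation
        "lon": lon,
        "timestamp": int(time.time()),
    }
    # The person using the app never sees this request happen.
    requests.post(COLLECTOR_URL, json=payload, timeout=5)

Multiplied across thousands of apps and millions of devices, pings like these add up to the continuous location histories described in the complaint.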

Another lawsuit against Kochava in California makes similar allegations of surreptitious data collection and analysis, and claims that Kochava sells customized data feeds, based on extremely sensitive and private information, precisely tailored to its clients’ needs.

The data broker marketplace has been tracking you for years, thanks to mobile phones and web browser cookies.

AI pierces your privacy

The FTC’s complaint also illustrates how advancing AI tools are enabling a new phase in data analysis. Generative AI’s ability to process vast amounts of data is reshaping what can be done with and learned from mobile data in ways that invade privacy. This includes inferring and disclosing sensitive or otherwise legally protected information, like medical records and images.

AI provides the ability both to know and predict just about anything about individuals and groups, even very sensitive behavior. It also makes it possible to manipulate individual and group behavior, inducing decisions in favor of the specific users of the AI tool.

This type of “AI-coordinated manipulation” can supplant your decision-making ability without your knowledge.

Privacy in the balance

The FTC enforces laws against unfair and deceptive business practices, and it informed Kochava in 2022 that the company was in violation. Both sides have had some wins and losses in the ongoing case. Senior U.S. District Judge B. Lynn Winmill, who is overseeing the case, dismissed the FTC’s first complaint and required more facts from the FTC. The commission filed an amended complaint that provided much more specific allegations.

Winmill has not yet ruled on another Kochava motion to dismiss the FTC’s case, but as of a Jan. 3, 2024 filing in the case, the parties are proceeding with discovery. A 2025 trial date is expected, but the date has not yet been set.

For now, companies, privacy advocates and policymakers are likely keeping an eye on this case. Its outcome, combined with proposed legislation and the FTC’s focus on generative AI, data and privacy, could spell big changes for how companies acquire data, the ways that AI tools can be used to analyze data, and what data can lawfully be used in machine- and human-based data analytics.

Anne Toomey McKenna, Visiting Professor of Law, University of Richmond

This article is republished from The Conversation under a Creative Commons license. 

Sunday, January 7, 2024

Radiation therapy takes advantage of cancer’s poor DNA repair abilities – an oncologist and physicist explain how

Advances in radiotherapy involve combining new technologies with clinical expertise. Kateryna Kon/Science Photo Library via Getty Images
Behzad Ebrahimi, University of Chicago and Mark Korpics, University of Chicago

Nearly half of all cancer patients undergo radiation therapy as part of their care. Ionizing radiation, or the emission of high-energy waves or particles, works as a therapy by damaging a cancer cell’s DNA. It’s an effective tool for killing cancer cells because they are generally much less adept at DNA repair compared to healthy cells. Damaging specific parts of DNA prevents cancer cells from reproducing, effectively killing them.

A major limitation of radiation therapy is the damage it may cause as it passes through healthy organs to get to tumors located deep in the body. The need to protect healthy organs limits the dose of radiation that can be delivered to cancerous tissue, thereby reducing the chance of successful treatment.

Overcoming this challenge has long been a mission for medical physicists and radiation oncologists like us. Improvements to radiotherapy will enable clinicians to not only better control tumors overall, but also open the door for more favorable outcomes in patients with cancers that are more resistant to radiation.

Fundamentals of radiation therapy

At the heart of radiation therapy lies the fundamental principle that cancer cells are more susceptible to radiation than healthy cells. However, there are exceptions. Sometimes cancer cells’ resistance to radiation is comparable to, or even greater than, that of their neighboring healthy cells.

When tumors lie close to vital organs that are highly sensitive to radiation, such as the brain or the bowels, the amount of radiation that can be safely delivered is significantly limited. In cases where tumors are significantly less sensitive to radiation than the organs surrounding them, radiotherapy may not be the best choice.

Radiation therapy involves directing strong beams of energy to kill cancer cells.

Patient immobilization is another key aspect to reducing toxicity from radiation. Patients need to be completely motionless during treatments to ensure that the beam of radiation mainly targets tumors and not the healthy tissues surrounding them. If a patient moves during treatment, it can mean the radiation beam is partly or even entirely missing the cancer target. This scenario both underdoses the cancer and increases the risk of harming healthy tissue.

There are a few common types of radiotherapy that deliver radiation in different ways:

External beam radiation therapy

External beam radiation involves directing radiation from an outside source to a single part of the body.

Linear accelerators, commonly referred to as LINACs, are currently the most common technology used in radiation treatments. These machines generate beams of high-energy electrons and X-rays that can be aimed at cancer tissue with precision. The high energy of these beams allows them to penetrate deep into the body to reach tumors.

Another form of radiotherapy is proton beam therapy, or PBT, which directs protons instead of X-rays at tumors. Currently, PBT is only available at a limited number of locations. It is typically recommended for specific populations such as pediatric patients because, unlike X-rays, it has an adjustable range that minimizes the effects of radiation on organs beyond the target area, potentially reducing toxicity.

Image-guided radiation therapy

In the early 2000s, researchers incorporated CT scanners into LINACs. This enabled real-time imaging of the patient’s anatomy just before or during treatment. By acting as the eyes of the care team, imaging reduced uncertainty about the location of tumors and improved the precision and accuracy of radiation therapy.

Newer linear accelerators are now incorporating MRIs, which significantly improve visualization of patient anatomy and tumors, further advancing treatment precision and accuracy.

Patients need to stay still during radiotherapy to ensure the beam stays on course. Mark Kostich/iStock via Getty Images Plus

Researchers are also adding positron emission tomography, or PET, scanners that provide information about the metabolic function of tumors. This advance makes it possible to increase the radiation dose specifically at the most active areas of tumors.

Adaptive therapy

An emerging approach to radiation therapy called adaptive therapy uses imaging to dynamically adjust treatment as the tumor or its positioning changes each day.

In conventional radiation therapy, patients receive the same treatment plan across multiple treatment sessions. However, adaptive therapy may apply several adjustments or lead to a completely new treatment plan in order to address changes to the tumor’s condition over the course of treatment.

Historically, the concept of adaptive therapy was more theoretical than practical. It faced many challenges, including a lack of suitable imaging technologies. Additionally, creating a treatment plan involves heavy computational work and collaboration among various specialties within a care team. The fact that the process needs to be repeated multiple times renders adaptive therapy particularly resource-intensive and time-consuming. However, researchers are looking into ways to use artificial intelligence to automate some of these steps to make this approach more practical.
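In schematic terms, the daily adaptive loop described above might look like the following Python sketch. It is a simplified assumption, not a clinical system: the function names, the single “shift” metric and the 5 mm tolerance are all placeholders.

REPLAN_THRESHOLD_MM = 5.0  # hypothetical tolerance on anatomical change

def adaptive_session(reference_plan, acquire_image, measure_change, replan, deliver):
    """One treatment session in an adaptive radiotherapy workflow."""
    todays_image = acquire_image()        # CT or MRI taken on the machine
    change_mm = measure_change(reference_plan, todays_image)
    if change_mm > REPLAN_THRESHOLD_MM:
        # The tumor or nearby organs have shifted or changed shape, so a new
        # plan is generated: the resource-intensive step researchers hope
        # AI can help automate.
        plan = replan(reference_plan, todays_image)
    else:
        plan = reference_plan             # the existing plan is still valid
    deliver(plan)
    return plan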

Brachytherapy

Brachytherapy – stemming from the Greek word “brachys,” meaning “short” – is another widely used method in radiation therapy. It involves placing a radioactive source called a seed near or directly inside a tumor or affected area, reducing the distance from the radiation source to its target.

In contrast to external beam radiation therapy, where radiation often must pass through healthy tissue to reach cancer cells, brachytherapy applies radiation directly to the tumor. This technique is particularly advantageous for certain cancer types that are accessible through noninvasive or minimally invasive procedures, such as skin cancers, gynecological tumors and genitourinary tumors.
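The payoff of shrinking that distance can be sketched with a simple point-source approximation, a deliberate simplification of real brachytherapy dosimetry, in which dose falls off roughly with the square of the distance from the source.

def relative_dose(distance_cm: float, reference_cm: float = 1.0) -> float:
    """Dose at distance_cm relative to the dose at reference_cm, using an
    inverse-square (idealized point-source) approximation."""
    return (reference_cm / distance_cm) ** 2

print(relative_dose(1.0))   # 1.0  -- tissue 1 cm from an implanted seed
print(relative_dose(10.0))  # 0.01 -- tissue 10 cm away receives about 1% as much

A source sitting next to the tumor therefore concentrates dose where it is wanted while sparing more distant healthy tissue, which is the advantage brachytherapy exploits.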

By placing the radioactive source inside the body, brachytherapy can target tumors more directly than external beam radiotherapy. Cancer Research UK/Wikimedia Commons, CC BY-SA

The conventional approach to brachytherapy involved using needles to place or inject radioactive seeds inside or adjacent to the cancerous tissue. These seeds would remain in the body either temporarily or permanently, with the goal of irradiating the area at a low dose over a long period of time.

An increasingly popular form of brachytherapy called high-dose rate brachytherapy involves using a tube to guide a more highly radioactive seed directly to the affected tissue before removing it after a few minutes. The seed emits a high dose of radiation in a short period of time, which enables clinicians to treat patients quickly in an outpatient setting.
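As a back-of-the-envelope way to see the contrast, total dose is roughly dose rate multiplied by exposure time, so a much stronger source left in place for minutes can deliver a dose on the same order as a weak seed working for days. The numbers in this Python snippet are purely illustrative assumptions, not clinical values.

def total_dose_gy(dose_rate_gy_per_hour: float, hours: float) -> float:
    """Approximate total dose as dose rate multiplied by exposure time."""
    return dose_rate_gy_per_hour * hours

# Illustrative, non-clinical numbers:
low_rate_total = total_dose_gy(0.05, 24 * 7)    # weak seed over a week -> 8.4
high_rate_total = total_dose_gy(60.0, 10 / 60)  # strong source for 10 minutes -> 10.0
print(low_rate_total, high_rate_total)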

Future of radiation therapy

Radiation therapy is continually evolving to more effectively and precisely treat cancer.

For example, a technique that delivers radiation at an ultra-high rate called FLASH-RT has shown promise in its ability to increase dosages without excessive toxicity. Researchers are also exploring treatments using ions heavier than protons to more effectively damage DNA in cancer cells and enhance the efficacy of radiation therapy.

Advances in the field hint at a future of more personalized radiation therapy, highlighting the fusion of technology and medical expertise in the fight against cancer.

Behzad Ebrahimi, Assistant Professor of Radiation and Cellular Oncology, University of Chicago and Mark Korpics, Assistant Professor of Radiation and Cellular Oncology, University of Chicago

This article is republished from The Conversation under a Creative Commons license.