Sunday 02 2024

I’m an astrophysicist mapping the universe with data from the Chandra X-ray Observatory – clear, sharp photos help me study energetic black holes

NASA’s Chandra X-ray Observatory detects X-ray emissions from astronomical events. NASA/CXC & J. Vaughan
Giuseppina Fabbiano, Smithsonian Institution

When a star is born or dies, or when any other very energetic phenomenon occurs in the universe, it emits X-rays, which are high-energy light particles that aren’t visible to the naked eye. These X-rays are the same kind that doctors use to take pictures of broken bones inside the body. But instead of looking at the shadows produced when bones stop X-rays inside a person, astronomers detect X-rays flying through space to image phenomena such as black holes and supernovae.

Images and spectra – charts showing the distribution of light across different wavelengths from an object – are the two main ways astronomers investigate the universe. Images tell them what things look like and where certain phenomena are happening, while spectra tell them how much energy the photons, or light particles, they are collecting have. Spectra can clue them in to how the event they came from formed. When studying complex objects, they need both imaging and spectra.

Scientists and engineers designed the Chandra X-ray Observatory to detect these X-rays. Since 1999, Chandra’s data has given astronomers incredibly detailed images of some of the universe’s most dramatic events.

The Chandra craft, which looks like a long metal tube with six solar panels coming off it in two wings.
The Chandra spacecraft and its components. NASA/CXC/SAO & J.Vaughan

Stars forming and dying create supernova explosions that send chemical elements out into space. Chandra watches as gas and stars fall into the deep gravitational pulls of black holes, and it bears witness as gas that’s a thousand times hotter than the Sun escapes galaxies in explosive winds. It can see when the gravity of huge masses of dark matter traps that hot gas in gigantic pockets.

An explosion of light and color, and a cloud with points of bright light.
On the left is the Cassiopeia A supernova. The image is about 19 light years across, and different colors in the image identify different chemical elements (red indicates silicon, yellow indicates sulfur, cyan indicates calcium, purple indicates iron and blue indicates high energy). The point at the center could be the neutron star remnant of the exploded star. On the right are the colliding ‘Antennae’ galaxies, which form a gigantic structure about 30,000 light years across. Chandra X-ray Center

NASA designed Chandra to orbit around the Earth because it would not be able to see any of this activity from Earth’s surface. Earth’s atmosphere absorbs X-rays coming from space, which is great for life on Earth because these X-rays can harm biological organisms. But it also means that even if NASA placed Chandra on the highest mountaintop, it still wouldn’t be able to detect any X-rays. NASA needed to send Chandra into space.

I am an astrophysicist at the Smithsonian Astrophysical Observatory, part of the Center for Astrophysics | Harvard and Smithsonian. I’ve been working on Chandra since before it launched 25 years ago, and it’s been a pleasure to see what the observatory can teach astronomers about the universe.

Supermassive black holes and their host galaxies

Astronomers have found supermassive black holes, which have masses millions to billions of times that of our Sun, in the centers of all galaxies. These supermassive black holes are mostly sitting there peacefully, and astronomers can detect them by looking at the gravitational pull they exert on nearby stars.

But sometimes, stars or clouds fall into these black holes, which activates them and makes the region close to the black hole emit lots of X-rays. Once activated, they are called active galactic nuclei, AGN, or quasars.

My colleagues and I wanted to better understand what happens to the host galaxy once its black hole turns into an AGN. We picked one galaxy, ESO 428-G014, to look at with Chandra.

An AGN can outshine its host galaxy, which means that more light comes from the AGN than all the stars and other objects in the host galaxy. The AGN also deposits a lot of energy within the confines of its host galaxy. This effect, which astronomers call feedback, is an important ingredient for researchers who are building simulations that model how the universe evolves over time. But we still don’t quite know how much of a role the energy from an AGN plays in the formation of stars in its host galaxy.

Luckily, images from Chandra can provide important insight. I use computational techniques to build and process images from the observatory that can tell me about these AGNs.

Three images of a black hole, from low to high resolution, with a bright spot above and right from the center surrounded by clouds.
Getting the ultimate Chandra resolution. From left to right, you see the raw image, the same image at a higher resolution and the image after applying a smoothing algorithm. G. Fabbiano

The active supermassive black hole in ESO 428-G014 produces X-rays that illuminate a large area, extending as far as 15,000 light years away from the black hole. The basic image that I generated of ESO 428-G014 with Chandra data tells me that the region near the center is the brightest, and that there is a large, elongated region of X-ray emission.

The same data, at a slightly higher resolution, shows two distinct regions with high X-ray emissions. There’s a “head,” which encompasses the center, and a slightly curved “tail,” extending down from this central region.

I can also process the data with an adaptive smoothing algorithm that brings the image into an even higher resolution and creates a clearer picture of what the galaxy looks like. This shows clouds of gas around the bright center.
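The idea behind adaptive smoothing is that the smoothing scale tracks the local signal: bright regions keep fine detail while faint regions are smoothed heavily. Chandra’s standard analysis tools implement this far more carefully; the following is a minimal numpy/scipy sketch of the principle only, with the photon threshold and kernel sizes invented for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def adaptive_smooth(counts, min_counts=9.0, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Toy adaptive smoothing: each pixel takes its value from the least-
    smoothed image whose local window already holds min_counts photons."""
    counts = counts.astype(float)
    out = gaussian_filter(counts, sigmas[-1])   # fallback: heaviest smoothing
    done = np.zeros(counts.shape, dtype=bool)
    for s in sigmas:
        size = 2 * int(3 * s) + 1               # window of roughly +/- 3 sigma
        local_sum = uniform_filter(counts, size=size) * size**2
        pick = (local_sum >= min_counts) & ~done
        out[pick] = gaussian_filter(counts, s)[pick]
        done |= pick
    return out
```

Run on a sparse photon-count image, a bright point source stays sharp while the faint diffuse emission around it is averaged into smooth clouds, which is why the processed image of ESO 428-G014 reveals structure the raw image hides.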

My team has been able to see some of the ways the AGN interacts with the galaxy. The images show nuclear winds sweeping the galaxy, dense clouds and interstellar gas reflecting X-ray light, and jets shooting out radio waves that heat up clouds in the galaxy.

These images are teaching us how this feedback process operates in detail and how to measure how much energy an AGN deposits. These results will help researchers produce more realistic simulations of how the universe evolves.

The next 25 years of X-ray astronomy

The year 2024 marks the 25th year since Chandra started making observations of the sky. My colleagues and I continue to depend on Chandra to answer questions about the origin of the universe that no other telescope can.

Chandra’s X-ray data supplements information from the Hubble Space Telescope and the James Webb Space Telescope to give astronomers unique answers to open questions in astrophysics, such as where the supermassive black holes found at the centers of all galaxies came from.

For this particular question, astronomers used Chandra to observe a faraway galaxy first observed by the James Webb Space Telescope. This galaxy emitted the light captured by Webb 13.4 billion years ago, when the universe was young. Chandra’s X-ray data revealed a bright supermassive black hole in this galaxy and suggested that supermassive black holes may form from collapsing clouds in the early universe.

Sharp imaging has been crucial for these discoveries. But Chandra is expected to last only another 10 years. To keep the search for answers going, astronomers will need to start designing a “super Chandra” X-ray observatory that could succeed Chandra in future decades, though NASA has not yet announced any firm plans to do so.

Giuseppina Fabbiano, Senior Astrophysicist, Smithsonian Institution

This article is republished from The Conversation under a Creative Commons license.


Monday 01 2024

What is Volt Typhoon? A cybersecurity expert explains the Chinese hackers targeting US critical infrastructure

U.S.-China antagonism is particularly acute in the realm of hacking and cybersecurity. AP Photo/Kiichiro Sato
Richard Forno, University of Maryland, Baltimore County

Volt Typhoon is a Chinese state-sponsored hacker group. The United States government and its primary global intelligence partners, known as the Five Eyes, issued a warning on March 19, 2024, about the group’s activity targeting critical infrastructure.

The warning echoes analyses by the cybersecurity community about Chinese state-sponsored hacking in recent years. As with many cyberattackers, Volt Typhoon goes by many aliases: it is also known as Vanguard Panda, Bronze Silhouette, Dev-0391, UNC3236, Voltzite and Insidious Taurus. Following these latest warnings, China again denied that it engages in offensive cyberespionage.

Volt Typhoon has compromised thousands of devices around the world since it was publicly identified by security analysts at Microsoft in May 2023. However, some analysts in both the government and cybersecurity community believe the group has been targeting infrastructure since mid-2021, and possibly much longer.

Volt Typhoon uses malicious software that penetrates internet-connected systems by exploiting vulnerabilities such as weak administrator passwords, factory default logins and devices that haven’t been updated regularly. The hackers have targeted communications, energy, transportation, water and wastewater systems in the U.S. and its territories, such as Guam.

In many ways, Volt Typhoon functions similarly to traditional botnet operators that have plagued the internet for decades. It takes control of vulnerable internet devices such as routers and security cameras to hide and establish a beachhead in advance of using that system to launch future attacks.

Operating this way makes it difficult for cybersecurity defenders to accurately identify the source of an attack. Worse, defenders could accidentally retaliate against a third party who is unaware that they are caught up in Volt Typhoon’s botnet.

Why Volt Typhoon matters

Disrupting critical infrastructure has the potential to cause economic harm around the world. Volt Typhoon’s operation also poses a threat to the U.S. military by potentially disrupting power and water to military facilities and critical supply chains.

FBI Director Christopher Wray testified at a congressional hearing on Jan. 31, 2024, about Chinese hackers targeting U.S. critical infrastructure.

Microsoft’s 2023 report noted that Volt Typhoon could “disrupt critical communications infrastructure between the United States and Asia region during future crises.” The March 2024 report, published in the U.S. by the Cybersecurity and Infrastructure Security Agency, likewise warned that the botnet could lead to “disruption or destruction of critical services in the event of increased geopolitical tensions and/or military conflict with the United States and its allies.”

Volt Typhoon’s existence and the escalating tensions between China and the U.S., particularly over Taiwan, underscore the latest connection between global events and cybersecurity.

Defending against Volt Typhoon

The FBI reported on Jan. 31, 2024, that it had disrupted Volt Typhoon’s operations by removing the group’s malware from hundreds of small office/home office routers. However, the U.S. is still determining the extent of the group’s infiltration of America’s critical infrastructure.

On March 25, 2024, the U.S. and U.K. announced that they had imposed sanctions on Chinese hackers involved in compromising their infrastructures. And other countries, including New Zealand, have revealed cyberattacks traced back to China in recent years.

All organizations, especially infrastructure providers, must practice time-tested safe computing centered on preparation, detection and response. They must ensure that their information systems and smart devices are properly configured and patched, and that they can log activity. And they should identify and replace any devices at the edges of their networks, such as routers and firewalls, that no longer are supported by their vendor.
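As a concrete illustration of that audit advice, here is a small sketch that flags inventory entries still using factory-default logins or running past their vendor’s support window, the weaknesses Volt Typhoon is reported to exploit. The device names, credentials and dates are entirely made up:

```python
from datetime import date

# Credential pairs commonly shipped as factory defaults (illustrative list)
DEFAULT_CREDS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def audit(devices, today):
    """Flag devices with factory-default logins or past vendor support."""
    findings = []
    for d in devices:
        if d["login"] in DEFAULT_CREDS:
            findings.append((d["name"], "factory-default credentials"))
        if d["end_of_support"] < today:
            findings.append((d["name"], "no longer supported by vendor"))
    return findings

# Hypothetical edge-device inventory
devices = [
    {"name": "edge-router-1", "login": ("admin", "admin"),
     "end_of_support": date(2022, 6, 1)},
    {"name": "lobby-camera", "login": ("ops", "x7!Qk2m"),
     "end_of_support": date(2027, 1, 1)},
]
findings = audit(devices, date(2024, 3, 25))
```

A real program would pull this inventory from asset-management and logging systems rather than a hard-coded list, but the principle is the same: unsupported edge devices and default credentials should surface automatically, not during an incident.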

Organizations can also implement strong user-authentication measures such as multifactor authentication to make it more difficult for attackers like Volt Typhoon to compromise systems and devices. More broadly, the comprehensive NIST Cybersecurity Framework can help these organizations develop stronger cybersecurity postures to defend against Volt Typhoon and other attackers.

Individuals, too, can take steps to protect themselves and their employers by ensuring their devices are properly updated, enabling multifactor authentication, never reusing passwords, and otherwise remaining vigilant to suspicious activity on their accounts, devices and networks.

For cybersecurity practitioners and society generally, attacks like Volt Typhoon can represent an enormous geopolitical cybersecurity threat. They are a reminder for everyone to monitor what’s going on in the world and consider how current events can affect the confidentiality, integrity and availability of all things digital.

Richard Forno, Principal Lecturer in Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. 

Saturday 02 2024

We’ve been here before: AI promised humanlike machines – in 1958

Frank Rosenblatt with the Mark I Perceptron, the first artificial neural network computer, unveiled in 1958. National Museum of the U.S. Navy/Flickr
Danielle Williams, Arts & Sciences at Washington University in St. Louis

A roomsize computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a brief news story buried deep in The New York Times. The story cited the U.S. Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.

The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past – and the reasons for them. While optimism drives progress, it’s worth paying attention to the history.

The Perceptron, invented by Frank Rosenblatt, arguably laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components together. Modern day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.

Much like modern-day machine learning, if the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction of what comes next the next time around. Familiar modern AI systems work in much the same way. Using a prediction-based format, large language models, or LLMs, are able to produce impressive long-form text-based responses and associate images with text to produce new images based on prompts. These systems get better and better as they interact more with users.
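That error-driven weight update fits in a few lines. The sketch below is a generic textbook perceptron learning rule in Python, not Rosenblatt’s analog hardware; here it learns logical OR, a simple two-category problem:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt-style learning rule: predict, compare with the target,
    and nudge the weights whenever the prediction is wrong."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = target - pred        # 0 if correct, +1 or -1 if wrong
            w += lr * err * xi         # "alter its connections"
            b += lr * err
    return w, b

# Logical OR: a linearly separable two-category problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```

After a handful of passes the weights settle and the predictions match the targets. Modern networks replace this single layer of weights with millions of them, trained by gradient descent, but the predict-compare-adjust loop is the same.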

A chart with a horizontal row of nine colored blocks through the center and numerous black vertical lines connecting the blocks with sections of text above and below the blocks
A timeline of the history of AI starting in the 1940s. Danielle J. Williams, CC BY-ND

AI boom and bust

In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like Marvin Minsky claimed that the world would “have a machine with the general intelligence of an average human being” by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found.

It quickly became apparent that the AI systems knew nothing about their subject matter. Without the appropriate background and contextual knowledge, it’s nearly impossible to accurately resolve ambiguities present in everyday language – a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, hit in 1974 following the perceived failure of the Perceptron.

However, by 1980, AI was back in business, and the first official AI boom was in full swing. There were new expert systems, AIs designed to solve problems in specific areas of knowledge, that could identify objects and diagnose diseases from observable data. There were programs that could make complex inferences from simple stories, the first driverless car was ready to hit the road, and robots that could read and play music were playing for live audiences.

But it wasn’t long before the same problems stifled excitement once again. In 1987, the second AI winter hit. Expert systems were failing because they couldn’t handle novel information.

The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn’t lead to an official boom, AI underwent substantial changes. Researchers addressed the knowledge acquisition problem with data-driven approaches to machine learning, letting systems learn from examples rather than rely on hand-coded expertise.

This time also marked a return to the neural-network-style perceptron, but this version was far more complex, dynamic and, most importantly, digital. The return to the neural network, along with the invention of the web browser and an increase in computing power, made it easier to collect images, mine for data and distribute datasets for machine learning tasks.

Familiar refrains

Fast forward to today and confidence in AI progress has begun once again to echo promises made nearly 60 years ago. The term “artificial general intelligence” is used to describe the activities of LLMs like those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious.

Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, some contemporary AI theorists see today’s artificial neural networks the same way. In 2023, Microsoft published a paper saying that “GPT-4’s performance is strikingly close to human-level performance.”

Three men sit in chairs on a stage
Executives at big tech companies, including Meta, Google and OpenAI, have set their sights on developing human-level AI. AP Photo/Eric Risberg

But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest.

For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to idioms, metaphors, rhetorical questions and sarcasm – unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context.

Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently say it’s a snowplow 97% of the time.

Lessons to heed

In fact, it turns out that AI is quite easy to fool in ways that humans would immediately identify. I think it’s a consideration worth taking seriously in light of how things have gone in the past.

The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.

Danielle Williams, Postdoctoral Fellow in Philosophy of Science, Arts & Sciences at Washington University in St. Louis

This article is republished from The Conversation under a Creative Commons license. 

Tuesday 27 2024

Why federal efforts to protect schools from cybersecurity threats fall short

The cost of safeguarding America’s schools from cybercriminals could run as high as $5 billion. boonchai wedmakawand via Getty Images
Nir Kshetri, University of North Carolina – Greensboro

In August 2023, the White House announced a plan to bolster cybersecurity in K-12 schools – and with good reason. Between 2018 and mid-September 2023, there were 386 recorded cyberattacks in the U.S. education sector, and those attacks cost schools $35.1 billion. K-12 schools were the primary target.

The new White House initiative includes a collaboration with federal agencies that have cybersecurity expertise, such as the Cybersecurity and Infrastructure Security Agency, the Federal Communications Commission and the FBI. Technology firms like Amazon, Google, Cloudflare, PowerSchool and D2L have pledged to support the initiative with training and resources.

While the steps taken by the White House are positive, as someone who teaches and conducts research about cybersecurity, I don’t believe the proposed measures are enough to protect schools from cyberthreats. Here are four reasons why:

1. Schools face more cyberthreats than other sectors

Cyberattacks on K-12 schools increased more than eightfold in 2022. Educational institutions draw the interest of cybercriminals because their weak cybersecurity offers an opening into networks containing highly sensitive information.

Criminals can exploit students’ information to apply for fraudulent government benefits and open unauthorized bank accounts and credit cards. In testimony to the House Ways and Means Subcommittee on Social Security, a Federal Trade Commission official noted that children’s Social Security numbers are uniquely valuable because they have no credit history and can be paired with any name and date of birth. Over 10% of children enrolled in an identity protection service were discovered to have loans.

Cybercriminals can also use such information to launch ransomware attacks against schools. Ransomware attacks involve locking up a computer or its files and demanding payment for their release. The ransomware victimization rate in the education sector surpasses that of all other surveyed industries, including health care, technology, financial services and manufacturing.

Schools are especially vulnerable to cyberthreats because more and more schools are lending electronic devices to students. Criminals have been found to hide malware within online textbooks and essays to dupe students into downloading it. Should students or teachers inadvertently download malware onto school-owned devices, criminals can launch an attack on the entire school network.

When faced with such an attack, schools can be desperate to comply with criminals’ demands to ensure students’ access to learning.

2. Schools lack cybersecurity personnel

K-12 schools’ poor cybersecurity performance can be attributed, in part, to a lack of staff. About two-thirds of school districts lack a full-time cybersecurity position. Those with cybersecurity staff often don’t have the budget for a chief information security officer to oversee and manage the district’s strategy. Often, the IT director takes on this role, but that job covers all IT operations, with no specific emphasis on security.

3. Schools lack cybersecurity skills

The lack of cybersecurity skills among existing staff hinders the development of strong cybersecurity programs.

Only 10% of educators say that they have a deep understanding of cybersecurity. The majority of students say that they have minimal or no knowledge about cybersecurity. Cybersecurity awareness tends to be even lower in higher-poverty districts, where students have less access to cybersecurity education.

The Cybersecurity and Infrastructure Security Agency plans to provide cybersecurity training to an additional 300 K-12 schools, school districts and other organizations involved in K-12 education in the forthcoming school year. With 130,930 K-12 public schools and 13,187 public school districts in the U.S., CISA’s plan serves only a tiny fraction of them.

4. Inadequate funding

The FCC has proposed a pilot program that would allocate $200 million over three years to boost cyberdefenses. At roughly $66.6 million per year, that falls well short of covering the full cost, given the estimated $5 billion needed to adequately secure the nation’s K-12 schools.

The costs encompass hardware and software procurement, consulting, testing, and hiring data protection experts to combat cyberattacks. Frequent training is also needed to respond to evolving threats. As technology advances, cybercriminals adapt their methods to exploit vulnerabilities in digital systems. Teachers must be ready to address such risks.

Costs are sizable

How much should schools and districts be spending on cybersecurity? Other sectors can serve as a model to guide K-12 schools.

One way to determine cybersecurity funding is by the number of employees. In the financial services industry, for example, these costs range from $1,300 to $3,000 per full-time employee. There are over 4 million teachers in the United States. Setting cybersecurity spending at $1,300 per teacher – the low end of what financial firms spend – would require K-12 schools to spend a total of $5 billion.

An alternate approach is to determine cybersecurity funding relative to IT spending. On average, U.S. enterprises are estimated to spend 10% of their IT budgets on cybersecurity. Since K-12 schools were estimated to spend more than $50 billion on IT in the 2020-21 fiscal year, allocating 10% to cybersecurity would also require them to spend $5 billion.

Another approach is to allocate cybersecurity spending as a proportion of the total budget. In 2019, cybersecurity spending represented 0.3% of the federal budget. Federal, state and local governments collectively allocate $810 billion for K-12 education. If schools set cybersecurity spending at 0.3%, following the example of federal agencies, that would require an annual budget of $2.4 billion.
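The three estimates above are straightforward arithmetic. A short sketch using the article’s figures reproduces them (the $5.2 billion staffing-based result is what the article rounds to $5 billion):

```python
# Three back-of-envelope estimates of K-12 cybersecurity spending,
# using the figures quoted in the article.

teachers = 4_000_000                   # "over 4 million teachers"
staff_based = teachers * 1_300         # low end of financial-sector spend

it_spend = 50e9                        # K-12 IT spending, 2020-21
it_based = 0.10 * it_spend             # enterprises' ~10% of IT budgets

total_k12_budget = 810e9               # combined federal, state, local funding
share_based = 0.003 * total_k12_budget # 0.3%, matching the federal share
```

All three methods land in the same low-single-digit-billions range, which is why the $200 million pilot program looks so small by comparison.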

By contrast, a fifth of schools dedicate less than 1% of their IT budgets – not their entire budgets – to cybersecurity. In 12% of school districts, there is no allocation for cybersecurity at all.

Nir Kshetri, Professor of Management, University of North Carolina – Greensboro

This article is republished from The Conversation under a Creative Commons license. 

Saturday 03 2024

Animal CSI: Forensics comes for the wildlife trade

Scientists are using the latest in DNA fingerprinting to combat the multibillion-dollar business of trafficking plants and animals

Campbell’s death was as gruesome as the killers’ previous nine known crimes. Found mutilated in a pool of blood at his home in the district of Albany, South Africa, in June 2016, Campbell had been drugged but was likely in pain before he died from his injuries.

Campbell was a white rhinoceros living on a private reserve, and his killing would be the last hurrah of the now notorious Ndlovu Gang. The three poachers were arrested days later at the Makana Resort in Grahamstown, South Africa, caught red-handed with a bow saw, a tranquilizer dart gun and a freshly removed rhino horn. A variety of evidence, including cellphone records and ballistics analysis of the dart gun, would link them to the crime. But a key element was Campbell’s DNA, found in the horn and on the still-bloody saw.

Among the scientific techniques used to combat poaching and wildlife trafficking, DNA is king, says Cindy Harper, a veterinary geneticist at the University of Pretoria. Its application in animal investigations is small-scale but growing in a field with a huge volume of crime: The value of the illegal wildlife trade is as much as $20 billion per year, Interpol estimates.

“It’s not just a few people swapping animals around,” says Greta Frankham, a wildlife forensic scientist at the Australian Center for Wildlife Genomics in Sydney. “It’s got links to organized crime; it is an enormous amount of turnover on the black market.”

The problem is global. In the United States, the crime might be the illegal hunting of deer or black bears, the importing of protected-animal parts for food or medicinal use, the harvesting of protected cacti, or the trafficking of ivory trinkets. In Africa or Asia, it might be the poaching of pangolins, the globe’s most trafficked mammal for both its meat and its scales, which are used in traditional medicines and magic practices. In Australia, it might be the collection or export of the continent’s unique wildlife for the pet trade.

Techniques used in wildlife forensics are often direct descendants of tools from human crime investigations, and in recent years scientists have adapted and tailored them for use in animals. Harper and colleagues, for example, learned to extract DNA from rhinoceros horns, a task once thought impossible. And by building DNA databases — akin to the FBI’s CODIS database used for human crimes — forensic geneticists can identify a species and more: They might pinpoint a specimen’s geographic origin, family group, or even, in some cases, link a specific animal or animal part to a crime scene.

Adapting this science to animals has contributed to major crime busts, such as the 2021 arrests in an international poaching and wildlife trafficking ring. And scientists are further refining their techniques in the hopes of identifying more challenging evidence samples, such as hides that have been tanned or otherwise degraded.

“Wildlife trafficking investigations are difficult,” says Robert Hammer, a Seattle-based special agent-in-charge with Homeland Security Investigations, the Department of Homeland Security’s arm for investigating diverse crimes, including those involving smuggling, drugs and gang activity. He and his colleagues, he says, rely on DNA and other forensic evidence “to tell the stories of the animals that have been taken.”

First, identify

Wildlife forensics generally starts with a sample sent to a specialized lab by investigators like Hammer. Whereas people-crime investigators generally want to know “Who is it?” wildlife specialists are more often asked “What is this?” — as in, “What species?” That question could apply to anything from shark fins to wood to bear bile, a liver secretion used in traditional medicines.

“We get asked questions about everything from a live animal to a part or a product,” says Barry Baker, deputy laboratory director at the US National Fish and Wildlife Forensics Laboratory in Ashland, Oregon.

Investigators might also ask whether an animal photographed at an airport is a species protected by the Convention on International Trade in Endangered Species of Wild Fauna and Flora, or CITES, in which case import or export is illegal without a permit. They might want to know whether meat brought into the US is from a protected species, such as a nonhuman primate. Or they might want to know if a carved knickknack is real ivory or fake, a difference special lighting can reveal.

While some identifications can be made visually, DNA or other chemical analyses may be required, especially when only part of the creature is available. To identify species, experts turn to the DNA in mitochondria, the cellular energy factories that populate nearly every cell, usually in multiple copies. DNA sequences therein are similar in all animals of the same species, but different between species. By reading those genes and comparing them to sequences in a database such as the Barcode of Life, forensic geneticists can identify a species.
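In spirit, the species call is a nearest-match search: compare the query sequence against reference barcodes and report the best hit. Real pipelines use proper sequence alignment and curated databases such as the Barcode of Life; the toy sequences and crude similarity measure below are invented purely for illustration:

```python
def identity(a, b):
    """Fraction of matching positions between two pre-aligned sequences."""
    return sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))

def identify_species(query, references):
    """Return the reference species whose barcode best matches the query."""
    return max(references, key=lambda sp: identity(query, references[sp]))

# Invented 10-base stand-ins for real mitochondrial barcode sequences
references = {
    "white rhinoceros": "ACGTACGTAC",
    "black rhinoceros": "ACGTTCGTAA",
}
best = identify_species("ACGTACGTAT", references)
```

Here the query matches the first reference at 9 of 10 positions and the second at only 8, so the call is “white rhinoceros.” Real barcodes run hundreds of bases, which is what makes between-species differences stand out so reliably.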

To go further to try to link a specimen to a specific, individual animal, forensic geneticists use the same technique that’s used in human DNA forensics, in this case relying on the majority of DNA contained in the cell’s nucleus. Every genome contains repetitive sequences called microsatellites that vary in length from individual to individual. Measuring several microsatellites creates a DNA fingerprint that is rare, if not unique. In addition, some more advanced techniques use single-letter variations in DNA sequences for fingerprinting.

Comparing the DNA of two samples allows scientists to make a potential match, but it isn’t a clincher: That requires a database of DNA fingerprints from other members of the species to calculate how unlikely it is — say, a one-in-a-million chance — that the two samples came from different individuals. Depending on the species’ genetic diversity and its geographic distribution, a valid database could have as few as 50 individuals or it could require many more, says Ashley Spicer, a wildlife forensic scientist with the California Department of Fish and Wildlife in Sacramento. Such databases don’t exist for all animals and, indeed, obtaining DNA samples from even as few as 50 animals could be a challenge for rare or protected species, Spicer notes.
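The arithmetic behind that one-in-a-million figure is the “product rule” used in human DNA forensics: estimate how common the genotype at each locus is, using allele frequencies from the reference database, then multiply across loci. The allele frequencies below are invented for illustration; real casework draws them from a database built for the species in question, like the one Spicer describes.

```python
# The product rule, sketched with made-up numbers. More loci make the
# resulting match probability far smaller.

def genotype_frequency(p: float, q: float) -> float:
    """Expected genotype frequency under Hardy-Weinberg equilibrium:
    p*p for a homozygote (p == q), 2*p*q for a heterozygote."""
    return p * p if p == q else 2 * p * q

def random_match_probability(loci) -> float:
    """Chance that a random individual shares the whole profile,
    assuming independent loci: multiply the per-locus frequencies."""
    rmp = 1.0
    for p, q in loci:
        rmp *= genotype_frequency(p, q)
    return rmp

# Hypothetical 4-locus profile: one (p, q) allele-frequency pair per locus.
profile = [(0.1, 0.2), (0.05, 0.05), (0.3, 0.1), (0.2, 0.2)]
rmp = random_match_probability(profile)  # about 2.4e-07, roughly 1 in 4 million
```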

Investigators use these techniques in diverse ways: An animal may be the victim of a crime, the perpetrator or a witness. And if, say, dogs are used to hunt protected animals, investigators could find themselves with animal evidence related to both victim and suspect.

For witnesses, consider the case of a white cat named Snowball. When a woman disappeared in Richmond, on Canada’s Prince Edward Island, in 1994, a bloodstained leather jacket with 27 white cat hairs in the lining was found near her home. Her body was found in a shallow grave in 1995, and the prime suspect was her estranged common-law husband, who lived with his parents and Snowball, their pet. DNA from the root of one of the jacket hairs matched Snowball’s blood. Though the feline never took the stand, the cat’s evidence spoke volumes, helping to clinch a murder conviction in 1996.


A database for rhinos

The same kind of specific linking of individual animal to physical evidence was also a key element in the case of Campbell the white rhino. Rhino horn is prized: It’s used in traditional Chinese medicine and modern variants of the practice to treat conditions from colds to hangovers to cancer, and is also made into ornaments such as cups and beads. At the time of Campbell’s death, his horn, weighing north of 10 kilograms, was probably worth more than $600,000 — more than its weight in gold — on the black market.

The DNA forensics that helped nab the Ndlovu Gang started with experiments in the early 2000s, when rhino poaching was on the rise. Scientists once thought rhino horns were nothing but densely packed hair, lacking cells that would include DNA, but a 2006 study showed that cells, too, are present. A few years later, Harper’s group reported that even though these cells were dead, they contained viable DNA, and the researchers figured out how to access it by drilling into the horn’s core.

In 2010, a crime investigator from South Africa’s Kruger National Park dropped by Harper’s lab. He was so excited by the potential of her discovery to combat poaching that he ripped a poster describing her results off the wall, rolled it up and took it away with him. Soon after, Harper launched the Rhinoceros DNA Index System, or RhODIS. (The name is a play on the FBI’s CODIS database, for Combined DNA Index System.)

Today, thanks to 2012 legislation from the South African government, anyone in that nation who handles a rhino or its horn — for example, when dehorning animals for the rhinos’ own protection — must send Harper’s team a sample. RhODIS now contains about 100,000 DNA fingerprints, based on 23 microsatellites, from African rhinoceroses both black and white, alive and long dead, including most of the rhinos in South Africa and Namibia, as well as some from other nations.

RhODIS has assisted with numerous investigations, says Rod Potter, a private consultant and wildlife crime investigator who has worked with the South African Police Service for more than four decades. In one case, he recalls, investigators found a suspect with a horn in his possession and used RhODIS to identify the animal before the owner even knew the rhino was dead.

In Campbell’s case, in 2019 the three poachers were convicted, to cheers from observers in the courtroom, of charges related to 10 incidents. Each gang member was sentenced to 25 years in prison.

Today, as rhino poaching has rebounded after a pandemic-induced lull, the RhODIS database remains important. And even when RhODIS can’t link evidence to a specific animal, Potter says, the genetics are often enough to point investigators to the creature’s approximate geographic origin, because genetic markers vary by location and population. And that can help illuminate illegal trade routes.

Elephants also benefit

DNA can make a big impact on investigations into elephant poaching, too. Researchers at the University of Washington in Seattle, for example, measured DNA microsatellites from roving African elephants as well as seized ivory, then built a database and a geographical map of where different genetic markers occur among elephants. The map helps to determine the geographic source of poached, trafficked tusks seized by law enforcement officials.

Elephants travel in matriarchal herds, and DNA markers also run in families, allowing the researchers to determine the relatedness of different tusks, be they from parents, offspring, siblings or half-siblings. When they find tusks from the same elephant or clan in different shipments with a common port, it suggests that the shipments were sent from the same criminal network — which is useful information for law enforcement officials.
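In software terms, linking shipments through shared tusks is a connected-components problem: any two seizures containing DNA from the same individual get merged into one suspected network. The shipment names and tusk IDs below are hypothetical.

```python
# Group seizures into suspected trafficking networks via shared tusk IDs.
from collections import defaultdict

def link_shipments(shipments):
    """Cluster shipment names that share at least one tusk ID."""
    by_tusk = defaultdict(set)
    for name, tusks in shipments.items():
        for t in tusks:
            by_tusk[t].add(name)

    # Simple union-find: merge shipments that share any tusk.
    parent = {name: name for name in shipments}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for names in by_tusk.values():
        names = list(names)
        for other in names[1:]:
            parent[find(names[0])] = find(other)

    clusters = defaultdict(set)
    for name in shipments:
        clusters[find(name)].add(name)
    return [sorted(c) for c in clusters.values()]

seizures = {
    "Mombasa-2019": {"T17", "T42"},
    "Kampala-2020": {"T42", "T88"},  # shares tusk T42 with Mombasa-2019
    "Lagos-2021":   {"T90"},
}
```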

This kind of information came in handy during a recent international investigation, called Operation Kuluna, led by Hammer and colleagues at Homeland Security Investigations. It started with a sting: Undercover US investigators purchased African ivory that was advertised online. In 2020, the team spent $14,500 on 49 pounds of elephant ivory that was cut up, painted black, mixed with ebony and shipped to the United States with the label “wood.” The following year, the investigators purchased about five pounds of rhino horn for $18,000. The undercover buyers then expressed interest in lots more inventory, including additional ivory, rhino horns and pangolin scales.

The promise of such a huge score lured two sellers from the Democratic Republic of the Congo (DRC) to come to the United States, expecting to seal the $3.5 million deal. Instead, they were arrested near Seattle and eventually sentenced for their crimes. But the pair were not working alone: Operations like these are complex, says Hammer, “and behind complex conspiracies come money, organizers.” And so the investigators took advantage of elephant genetic and clan data, which helped link the tusks to other seizures. It was like playing “Six Degrees of Kevin Bacon,” says Hammer.

Shortly after the US arrests, Hammer’s counterparts in Africa raided warehouses in the DRC to seize more than 2,000 pounds of ivory and 75 pounds of pangolin scales, worth more than $1 million.

Despite these successes, wildlife forensics remains a small field: The Society for Wildlife Forensic Science has fewer than 200 members in more than 20 countries. And while DNA analysis is powerful, the ability to identify species or individuals is only as good as the genetic databases researchers can compare their samples to. In addition, many samples contain degraded DNA that simply can’t be analyzed — at least, not yet.

Today, in fact, a substantial portion of wildlife trade crimes may go unprosecuted because researchers don’t know what they’re looking at. The situation leaves scientists stymied by that very basic question: “What is this?”

For example, forensic scientists can be flummoxed by animal parts that have been heavily processed. Cooked meat is generally traceable; leather is not. “We have literally never been able to get a DNA sequence out of a tanned product,” says Harper, who wrote about the forensics of poaching in the 2023 Annual Review of Animal Biosciences. In time, that may change: Several researchers are working to improve identification of degraded samples. They might work out ways to do so based on the proteins therein, says Spicer, since these are more resistant than DNA is to destruction by heat or chemistry.

Success, stresses Spicer, will require the cooperation of wildlife forensic scientists around the world. “Anywhere that somebody can get a profit or exploit an animal, they’re going to do it — it happens in every single country,” she says. “And so it’s really essential that we all work together.”

Saturday 27 2024

We urgently need data for equitable personalized medicine

OPINION: A massive bias in medical studies toward men of European origin means that genetic variants in understudied populations don’t get the focus they deserve

Warfarin is a powerful blood thinner and a leading drug for cardiovascular disease worldwide. But in South Africa, it is among the top four drug varieties leading to hospitalization from adverse drug reactions. It’s reasonable to suppose that the drug has similar problematic effects farther across sub-Saharan Africa, though the national data needed to show it are lacking.

The fact that warfarin is riskier in some populations than others isn’t a surprise. Different geographic regions tend to host people with slightly different genetic makeups, and sometimes those genetic differences lead to radically different reactions to drugs. For certain people, a higher dosage of warfarin is fine; for others, it’s dangerous. Researchers have known this for decades.

The problem is that the majority of medical research, including genetic research, is still done on one narrow subset of the world’s population: men of Northern European origin. This means that negative drug-gene interactions in other, less well-studied populations can fly beneath the radar. In the case of warfarin, one study concluded that using someone’s genetic information to help guide their drug dosing would benefit 18 percent to 24 percent of people categorized as white, but have no benefit for people identified as Black, Chinese or Japanese.

While that study is a decade old, the general point still holds true: A bias in our current understanding of the genetics of different populations means that some people would be helped far more than others by genetically informed personalized medicine.

As a bioinformatician, I am now focusing my attention on gathering the statistics to show just how biased medical research data are. There are problems across the board, ranging from which research questions get asked in the first place, to who participates in clinical trials, to who gets their genomes sequenced. The world is moving toward “precision medicine,” where any individual can have their DNA analyzed and that information can be used to help prescribe the right drugs in the right dosages. But this won’t work if a person’s genetic variants have never been identified or studied in the first place.

It’s astonishing how powerfully our genetics can shape the way medicines work in our bodies. Take the gene CYP2D6, which is known to play a vital role in how fast humans metabolize 25 percent of all the pharmaceuticals on the market. If you have a genetic variant of CYP2D6 that makes you metabolize drugs more quickly, or less quickly, it can have a huge impact on how well those drugs work and the dangers you face from taking them. Codeine was banned throughout Ethiopia in 2015, for example, because a high proportion of people in the country (perhaps 30 percent) have a genetic variant of CYP2D6 that makes them quickly metabolize that drug into morphine, making it more likely to cause respiratory distress and even death.
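Pharmacogenetics commonly summarizes a CYP2D6 genotype as an “activity score” that maps to a metabolizer phenotype. The sketch below is loosely modeled on that approach, but the allele values and cutoffs are simplified for illustration and are not clinical guidance.

```python
# Illustrative CYP2D6 activity-score lookup. Allele values loosely follow
# published pharmacogenetics conventions but are simplified here.

ALLELE_ACTIVITY = {
    "*1": 1.0,    # normal function
    "*2": 1.0,    # normal function
    "*10": 0.25,  # decreased function
    "*17": 0.5,   # decreased function
    "*4": 0.0,    # no function
    "*5": 0.0,    # gene deletion
    "*1x2": 2.0,  # duplicated normal-function allele
}

def metabolizer_phenotype(allele_a: str, allele_b: str) -> str:
    """Sum the two alleles' activity values and bin into a phenotype."""
    score = ALLELE_ACTIVITY[allele_a] + ALLELE_ACTIVITY[allele_b]
    if score == 0:
        return "poor metabolizer"
    if score < 1.25:
        return "intermediate metabolizer"
    if score <= 2.25:
        return "normal metabolizer"
    return "ultrarapid metabolizer"
```

An ultrarapid metabolizer, such as someone carrying a duplicated normal-function allele, converts codeine to morphine unusually fast, which is the risk behind the Ethiopian ban described above.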

Researchers have identified over a hundred different CYP2D6 variants and there are likely many, many more out there that we don’t yet know the impacts of — especially in understudied populations.

Back in 2016, researchers published an important article looking at more than 2,500 genome-wide association studies done up to that time. These are studies that scan the genomes of thousands of people to find variants associated with disease traits. What the researchers found was disturbing: While there had been some improvement in diversity since 2009, 81 percent of the nearly 35 million samples in those studies still came from people of European descent.

You might expect that, since everyone knows this is a problem, it would have gotten much better over recent years. It hasn’t. In 2021, another study of genome-wide association studies showed that the European-origin proportion had increased, not decreased, from 81 percent to 86 percent.

It’s not just genome-wide studies that have this issue. Direct-to-consumer genetic sequencing services like 23andMe are also skewed: One analysis suggests that 95 percent of the participants have predicted European ancestry, compared to just 2 percent African. And in PharmGKB, one of the world’s leading databases of drug-gene interactions, 64 percent of the data come from people of European ancestry, though this group makes up only 16 percent of the global population. Indigenous Americans account for the smallest amount of the data (just 0.1 percent). But when taking global population into account, it is Central and South Asian people who are least well represented, with only 2 percent of the data but 26 percent of the global population.
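A simple way to compare these figures is a representation ratio: a group’s share of the data divided by its share of the global population, where 1.0 means proportional representation. Using the PharmGKB percentages cited above:

```python
# Representation ratio from the PharmGKB figures in the text.

def representation_ratio(data_share_pct: float, population_share_pct: float) -> float:
    """Ratio of a group's share of the data to its share of the
    global population; 1.0 means proportional representation."""
    return data_share_pct / population_share_pct

european = representation_ratio(64, 16)            # 4.0: four-fold over-represented
central_south_asian = representation_ratio(2, 26)  # ~0.08: badly under-represented
```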

People of African descent have the greatest genetic diversity on the planet (because humanity originated in Africa), and so arguably they deserve the greatest amount of study. But this is hardly the situation. This population makes up just 4 percent of the PharmGKB dataset, for example.

Geographic ancestry isn’t the only factor that’s biased. Women make up only 38 percent of participants in studies of drug effectiveness and pharmacokinetics, for example. Because of gender bias all along the line, women experience adverse drug reactions nearly twice as often as men. And this doesn’t even scratch the surface of people with genetic conditions — like my son who has Down syndrome — or other disabilities.

There are some good efforts working to correct these problems. On 18 October 2023, researchers announced plans to create one of the largest-yet databases of genomes exclusively from people with African ancestry. The project aims to recruit at least 500,000 volunteers (for comparison, tens of millions of people globally have had their genomes sequenced to date). This is a great effort; more should follow suit.

Everyone stands to gain from more diverse work. Right now, one clue that researchers use to help determine whether a genetic mutation might be linked to disease (or not) is whether that mutation is rare (or not); if a variant is extremely uncommon, this is one hint that it might be pathogenic (since most people don’t have a given disease). But this could be sending researchers chasing after red herrings. One study published in March 2023, for example, performed whole-genome sequencing on 180 people from 12 indigenous African populations, and found that of 154 mutations labeled “pathogenic” or “likely pathogenic” in a well-known database, 44 were at least five times more frequent in at least one of these African populations. This suggests that those mutations might be benign after all.
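The study’s logic can be expressed as a simple frequency check: if a variant labeled pathogenic turns out to be several-fold more frequent in some sequenced population than the frequency recorded in the database, the label deserves a second look. The variant names and frequencies below are invented for illustration.

```python
# Flag "pathogenic" labels that look like red herrings: variants much
# more common in some population than their database frequency suggests.

def flag_suspect_labels(variants, fold_threshold=5.0):
    """Return names of variants whose maximum observed population
    frequency is at least `fold_threshold` times the database value."""
    suspects = []
    for name, db_freq, pop_freqs in variants:
        if db_freq > 0 and max(pop_freqs.values()) >= fold_threshold * db_freq:
            suspects.append(name)
    return suspects

variants = [
    # (variant, database frequency, observed frequency in each population)
    ("var_A", 0.001, {"pop1": 0.0009, "pop2": 0.002}),  # only 2x: label stands
    ("var_B", 0.001, {"pop1": 0.012, "pop2": 0.0005}),  # 12x: likely benign
]
```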

The International Covenant on Economic, Social and Cultural Rights, adopted by the United Nations General Assembly on 16 December 1966, recognizes everyone’s rights to enjoy the benefits of scientific progress. But that is not happening yet. We need to ramp up representation in genetic and medical studies to ensure fair treatment for all.

Tuesday 16 2024

What will it take to recycle millions of worn-out EV batteries?

In Nevada and other US states, entrepreneurs are anticipating the coming boom in retired lithium-ion batteries from electric cars and hoping to create a market for recycled minerals

Thirty miles east of Reno, Nevada, past dusty hills patched with muted blue sage and the occasional injury-lawyer billboard, a large concrete structure rises prominently in the desert landscape. When fully constructed, it will be a pilot for a business that entrepreneurs envision as a major facet of America’s future green economy: lithium-ion battery recycling.

Construction manager Chuck Leber points out bays where trucks will drop off batteries, and deep drains in rooms to catch leaking chemicals. He shows me a two-foot concrete slab under the building — a hefty foundation so that workers can move equipment and adapt the plant while refining the recycling process. Later this year, the first batteries will pass through the facility; the goal is to ramp up to handle 20,000 metric tons of batteries a year.

The 60,000-square-foot plant owned by the American Battery Technology Company is an optimistic endeavor to address the inconvenient environmental downside of electric vehicles — their resource-demanding battery packs. It is also a test of whether business leaders can live up to their promises to help build a circular economy: one in which materials are reused indefinitely, minimizing the need to continually pry more minerals from the earth.

Since 2019, electric vehicles — EVs — have more than tripled their share of the auto market, and 6.6 million were sold globally in 2021. Facing pressure and sometimes outright regulation to reduce their climate footprint, many automakers have pledged to stop sales of new combustion-engine vehicles by 2040. “In five years, we are aiming for having tens of millions of electric vehicles on the roads,” says Alexandre Milovanoff of the sustainability consulting company Anthesis Group, who has studied how an EV transition would affect America’s electrical grid. “We’re talking about a market that is exploding.”

To feed the rising EV battery demand, the US government and companies are investing in domestic mining for the needed minerals — including nickel, manganese, cobalt and lithium (the price of which more than quadrupled in 2021). But they are also looking for ways to reduce dependence on newly mined materials through recycling. In March 2022, President Joe Biden invoked the Defense Production Act to bolster supplies of the in-demand minerals, directing domestic investments both in mining and in other forms of recovery.

Researchers say that figuring out recycling could help to avoid the environmental risks of more mining and a buildup of hazardous battery waste — but reprocessing these batteries and refining the metals they contain for reuse is difficult and costly, and many remain skeptical of how truly circular that supply chain can ever be. “An electric vehicle battery is a very complex piece of technology with a lot of different components in it — so a recycling facility is going to be very complicated,” says Michael McKibben, a geologist at the University of California, Riverside. “In the long run, that’s going to be important, but in the short run, it’s got a ways to go.”

Sourcing specific minerals

To power a car, electrons flow through an external circuit from the battery’s negative electrode, the anode, to its positive electrode, the cathode, while lithium ions shuttle between the two electrodes inside the cell. Typically, the anode is made of copper and graphite, while the cathode consists of a class of compounds called lithium metal oxides — ones that contain lithium plus other metals such as cobalt, manganese and nickel.

All of these metals must be sourced — and recycling alone cannot yet meet market needs. Though the US has numerous copper mines (and obtains a sizable chunk of copper from scrap recycling), nearly all of the other metals in lithium-ion batteries come from mines in other countries. More than 80 percent of global lithium comes from Chile, Australia and China, while more than 60 percent of cobalt comes from the Democratic Republic of Congo.

This overseas reliance can come with costs. Much of the lithium mined today, for example, comes from the fragile Atacama Desert in Chile, where the metal is recovered by evaporating salty brine in massive ponds. It’s cost-effective, but researchers and local communities have raised concerns over toxic wastes and the depletion and contamination of water supplies; by one estimate, it takes 500,000 gallons of water — largely lost to evaporation — to concentrate a single metric ton of lithium. Sourcing battery metals also has been connected with human rights abuses in some locations, such as cobalt mining in the Democratic Republic of Congo, where companies have been accused of using child labor, paying workers poorly and failing to provide basic safety equipment.

There’s also a greenhouse-gas price to pay for the long-distance transport of materials: Before anyone even gets in the driver’s seat of a brand-new electric vehicle, some EV battery materials have already traveled tens of thousands of miles. (Still, electric vehicles — with few exceptions — have a smaller carbon footprint than gas-powered cars, and electrifying transportation is key to slashing carbon emissions to stave off disastrous levels of climate change.)

For now, mining remains a need, and researchers think it’s possible to reduce its impacts through domestic operations and new technologies. But they say it is also crucial to ramp up the technology and business models for recycling. After thousands of charges and discharges, cells of lithium-ion batteries dry up and cracks form in the cathode materials, until the battery can neither hold nor deliver enough charge. Millions of EV batteries will soon be reaching this point, and if they’re deposited at the dump, they can leach toxic chemicals and even catch fire. A few US companies collect batteries for recycling, but this capacity lags behind the volume of spent lithium-ion batteries from cars, phones, computers and other electronics. In 2019, US recycling companies diverted only about 15 percent of all retired lithium-ion batteries from landfills.

The challenges of recycling

Profitability is a major barrier. Though lithium-ion batteries contain valuable metals, they are challenging to take apart and the minerals are hard to extract from the tight layers of inorganic and organic compounds. By one estimate, the cost of recycled lithium is five times that of virgin lithium from brine-mining. Compare that with lead-acid batteries in combustion cars, which are almost entirely diverted from landfills and recycled. “It’s easy as pie to recycle a lead-acid battery in comparison to a lithium-ion battery,” says geologist Jens Gutzmer, director of the Helmholtz Institute Freiberg for Resource Technology in Germany and coauthor of an article about building a circular metals economy in the Annual Review of Materials Research.

Another problem is that today’s main lithium-ion battery recycling processes are also not particularly efficient. A process used by many recyclers, pyrometallurgy, involves melting down the batteries and burning off plastic separators to extract the coveted metals. Pyrometallurgy is energy-intensive, emits toxic gases and can’t recover some valuable minerals, including lithium, at all.

With growing EV sales, a massive wave of dead electric car batteries will soon exacerbate recycling problems. By 2028, researchers predict that the world will have more than a million metric tons of them to deal with. “I like to compare it to the plastic industry — we have a lot of plastic waste, and people are not really dealing with that — and I’m just worried that this will be happening also with batteries,” says Laura Lander, a materials scientist at King’s College London. And yet if it could be made profitable, scaling up EV battery recycling could, by 2040, reduce the global need for newly mined lithium by 25 percent, and for cobalt and nickel by 35 percent, according to one report prepared by the Institute for Sustainable Futures at the University of Technology Sydney in Australia.

Efforts to improve battery recycling are underway at the Department of Energy’s ReCell Center, a collaboration with national labs and universities launched in 2019. There, researchers are working to scale up what’s called “direct recycling.” This method aims to recapture the cathode material — a carefully manufactured powder — without melting or dissolving the whole battery and destroying the powder in the process. “They put a lot of time and effort into making these beautiful, spherical particles that are about 10 microns in diameter with the right crystal structure,” says Albert Lipson, a materials scientist at Argonne National Laboratory and the ReCell Center.

Lipson’s research team developed a chemical process to successfully recover cathode powder, which can then be rejuvenated by adding fresh lithium — returning the charging capacity that was lost as the original battery aged. The direct recycling method could make it more profitable to recover battery components while producing fewer greenhouse gas emissions than other recycling processes that use energy-intensive steps to re-manufacture cathode materials. (These involve, at one point, putting the material in a massive furnace.) ReCell’s direct recycling is being done only in laboratory-size batches right now, but Lipson says his team is working with companies to scale up the process.

Battery recycling startups, for their part, are primarily using a technique called hydrometallurgy that dissolves the batteries in acid. Liquid solvents are then used to extract the minerals. Though hydrometallurgy isn’t new, the recycling companies — including American Battery Technology Company and Redwood Materials, both based in northern Nevada and headed by former Tesla engineers — say they are making the process more efficient and recovering more material than in the past.

Ryan Melsert, who heads American Battery Technology, says his time at Tesla’s Gigafactory — a gargantuan facility assembling batteries outside Sparks, Nevada — clued him in to ways to improve the recycling technology. Instead of shredding batteries as old-school hydrometallurgical recycling does, his company will use machines to break down used batteries, and then will separate and sell the lower-value components such as plastic, aluminum and steel. Proprietary chemical reactions will then extract nickel, cobalt, manganese and lithium.

“Instead of just dropping a battery in a furnace or a shredder,” Melsert says, “what our team has done is essentially take many of the same techniques we developed on the manufacturing side and we now operate them in reverse order.” He says the process can recover more than 90 percent of the high-value elements.

Getting the batteries from cars

But to recycle batteries, these startups will need to ensure that the packs make it to their facilities to begin with — a challenge in and of itself because facilities that process junked cars today don’t have protocols for EVs, including how to handle the batteries. Melsert says his company hopes to build on new partnerships with General Motors, Ford and Stellantis (which owns several brands including Dodge, Jeep and Maserati) to ensure that when a car is traded in, the battery will be sent for recycling. And Redwood Materials has announced collaborations with Volkswagen, Toyota, Ford and other automakers on battery collection and recycling.

As I walk with Leber, the construction manager, across American Battery Technology’s future recycling facility, he shows me where the finished goods warehouse will be located, across the building from the truck bays. During the first phase of operation, these finished goods will consist of “black mass,” a crumbly mixture of the valuable metals that will be sold to smelter companies for further refining and resale to battery manufacturers. Eventually, the company plans to add a second stage that will further refine this mix into separate minerals on-site.

American Battery Technology, Redwood Materials, Retriev Technologies and Canada-based Li-Cycle — the four main builders of EV battery recycling capacity in the US — all present visions on their websites suggesting they are striving to build an infinitely recyclable supply chain. But is truly infinite reuse of battery minerals possible? Experts like Milovanoff and Gutzmer say that’s unlikely due to barriers like labor costs and energy needs. Still, it is technically possible to scale up and recycle more than 90 percent of the lithium, cobalt, nickel and copper in batteries, Lipson says — as long as the economics works out.

Ultimately, the success of battery recycling rests on whether it can be done cheaply enough. Even with improved technology, recyclers may face difficulties making their products cost-competitive with virgin minerals, says Aimee Boulanger, executive director of the Initiative for Responsible Mining Assurance, a coalition that works with companies to improve environmental and labor standards of mining projects. Incentives and regulations may also be needed: In the European Union, regulators have proposed guidelines for sustainable batteries that include their containing a proportion of recycled materials.

Melsert is optimistic. He thinks that since most battery minerals are mined internationally now, the transportation and import costs of virgin minerals will make domestically recycled materials competitive — a calculation supported by some research. In about another two years, he hopes to start building a facility that’s an order of magnitude larger to keep up with growing EV sales. And with demand for minerals outpacing what recycling will, for now, be able to provide, his company also has stakes in mining lithium in central Nevada.

“Some of the largest companies in the world are buying as much recycled battery metals as available,” he says. “The challenge, right now, is really about who can scale up the quickest.”

Saturday 13 2024

Data brokers know everything about you – what FTC case against ad tech giant Kochava reveals

The data collectors see all. DrAfter123/DigitalVision Vectors via Getty Images
Anne Toomey McKenna, University of Richmond

Kochava, the self-proclaimed industry leader in mobile app data analytics, is locked in a legal battle with the Federal Trade Commission in a case that could lead to big changes in the global data marketplace and in Congress’ approach to artificial intelligence and data privacy.

The stakes are high because Kochava’s secretive data acquisition and AI-aided analytics practices are commonplace in the global location data market. In addition to numerous lesser-known data brokers, the mobile data market includes larger players like Foursquare and data market exchanges like Amazon’s AWS Data Exchange. The FTC’s recently unsealed amended complaint against Kochava makes clear that there’s truth to what Kochava advertises: it can provide data for “Any Channel, Any Device, Any Audience,” and buyers can “Measure Everything with Kochava.”

Separately, the FTC is touting a settlement it just reached with data broker Outlogic, in what it calls the “first-ever ban on the use and sale of sensitive location data.” Outlogic has to destroy the location data it has and is barred from collecting or using such information to determine who comes and goes from sensitive locations, like health care centers, homeless and domestic abuse shelters, and religious places.

According to the FTC and proposed class-action lawsuits against Kochava on behalf of adults and children, the company secretly collects, without notice or consent, and otherwise obtains vast amounts of consumer location and personal data. It then analyzes that data using AI, which allows it to predict and influence consumer behavior in ways that are impressively varied and alarmingly invasive, and serves the results up for sale.

Kochava has denied the FTC’s allegations.

The FTC says Kochava sells a “360-degree perspective” on individuals and advertises it can “connect precise geolocation data with email, demographics, devices, households, and channels.” In other words, Kochava takes location data, aggregates it with other data and links it to consumer identities. The data it sells reveals precise information about a person, such as visits to hospitals, “reproductive health clinics, places of worship, homeless and domestic violence shelters, and addiction recovery facilities.” Moreover, by selling such detailed data about people, the FTC says “Kochava is enabling others to identify individuals and exposing them to threats of stigma, stalking, discrimination, job loss, and even physical violence.”

I’m a lawyer and law professor who practices, teaches and researches AI, data privacy and evidence. These complaints underscore for me that U.S. law has not kept pace with regulation of commercially available data or governance of AI.

Most data privacy regulations in the U.S. were conceived in the pre-generative AI era, and there is no overarching federal law that addresses AI-driven data processing. There are congressional efforts to regulate the use of AI in decision-making, such as hiring and sentencing. There are also efforts to provide public transparency around AI’s use. But Congress has yet to pass legislation.

The Federal Trade Commission’s suit against Kochava is set against a backdrop of minimal regulation of data brokers.

What litigation documents reveal

According to the FTC, Kochava secretly collects and then sells its “Kochava Collective” data, which includes precise geolocation data, comprehensive profiles of individual consumers, consumers’ mobile app use details and Kochava’s “audience segments.”

The FTC says Kochava’s audience segments can be based on “behaviors” and on sensitive information such as gender identity, political and religious affiliation, race, visits to hospitals and abortion clinics, and people’s medical information, like menstruation, ovulation and even cancer treatments. By selecting certain audience segments, Kochava customers can identify and target extremely specific groups – for example, people whose gender identity is “other,” or all pregnant women who are African American and Muslim. The FTC says selected audience segments can be narrowed to a specific geographical area or, conceivably, even down to a specific building.

By “identify,” the FTC means that Kochava customers can obtain the name, home address, email address, economic status and stability, and much more data about people within selected groups. This data is purchased by organizations like advertisers, insurers and political campaigns that seek to narrowly classify and target people. The FTC also says it can be purchased by people who want to harm others.

How Kochava acquires such sensitive data

The FTC says Kochava acquires consumer data in two ways: through the software development kits Kochava provides to app developers, and directly from other data brokers. The FTC says those Kochava-supplied software development kits are installed in over 10,000 apps globally. The kits, embedded with Kochava’s code, collect troves of data and send it back to Kochava without the consumer being told about, or consenting to, the collection.

Another lawsuit against Kochava, in California, makes similar allegations of surreptitious data collection and analysis, and claims that Kochava sells customized data feeds, based on extremely sensitive and private information, precisely tailored to its clients’ needs.

The data broker marketplace has been tracking you for years, thanks to mobile phones and web browser cookies.

AI pierces your privacy

The FTC’s complaint also illustrates how advancing AI tools are enabling a new phase in data analysis. Generative AI’s ability to process vast amounts of data is reshaping what can be done with and learned from mobile data in ways that invade privacy. This includes inferring and disclosing sensitive or otherwise legally protected information, like medical records and images.

AI provides the ability both to know and predict just about anything about individuals and groups, even very sensitive behavior. It also makes it possible to manipulate individual and group behavior, inducing decisions in favor of the specific users of the AI tool.

This type of “AI-coordinated manipulation” can supplant your decision-making ability without your knowledge.

Privacy in the balance

The FTC enforces laws against unfair and deceptive business practices, and it informed Kochava in 2022 that the company was in violation. Both sides have had some wins and losses in the ongoing case. Senior U.S. District Judge B. Lynn Winmill, who is overseeing the case, dismissed the FTC’s first complaint and required more facts from the FTC. The commission filed an amended complaint that provided much more specific allegations.

Winmill has not yet ruled on another Kochava motion to dismiss the FTC’s case, but as of a Jan. 3, 2024, filing, the parties are proceeding with discovery. A trial is expected in 2025, though the date has not yet been set.

For now, companies, privacy advocates and policymakers are likely keeping an eye on this case. Its outcome, combined with proposed legislation and the FTC’s focus on generative AI, data and privacy, could spell big changes for how companies acquire data, the ways that AI tools can be used to analyze data, and what data can lawfully be used in machine- and human-based data analytics.

Anne Toomey McKenna, Visiting Professor of Law, University of Richmond

This article is republished from The Conversation under a Creative Commons license.