Monday, June 26, 2023

As Ukraine takes the fight to Russians, signs of unease in Moscow over war’s progress

Facing harder questions at home. Contributor/Getty Images
Peter Rutland, Wesleyan University

Editor’s note: This article was written before the events of June 24, 2023, during which Wagner Group paramilitaries seized a town in southern Russia and headed towards Moscow before standing down. On June 25, The Conversation published this article – Wagner’s mutiny punctured Putin’s ‘strongman’ image and exposed cracks in his rule – analyzing how the short-lived mutiny will impact Russian President Vladimir Putin.

Whether or not the Ukraine counteroffensive that began in early June 2023 succeeds in dislodging Russian troops from occupied territory, there are growing signs that the push has prompted anxiety back in Moscow.

Such unease was, I believe, detectable in Russian President Vladimir Putin’s meeting on June 13 with a group of influential military bloggers – people who support the war but have at times been critical of the way it is being fought. The meeting was unusual: In recent months, Putin has avoided public statements about the war and postponed his annual Russia Day phone-in show scheduled for June. He similarly canceled both the June call-in in 2022 as well as his annual news conference in December.

And the set-piece events he has attended have been less than convincing. At the June 13 meeting with military bloggers and war correspondents, Putin faced some pointed questions. In answering, he used the term “war” a number of times – deviating from his line that what is happening in Ukraine is a “special operation” – and conceded that Ukrainian attacks across the border into Russia had been damaging.

On the defensive?

The meeting was Putin’s first public assessment of the conflict since Ukrainian forces took the war into Russian territory, with drone strikes on Moscow on May 1 and again on May 30 and shelling and forays across the border in the Belgorod region on May 22. The latter led to the evacuation of tens of thousands of Russian civilians.

These developments further undermine Putin’s argument that this is a “special military operation” and not a war, and that life can continue as normal for ordinary Russians.

At the same time, Putin is facing a political challenge from Yevgeny Prigozhin, the erstwhile chef-turned-mercenary leader. Prigozhin heads the Wagner Group, a private company that has recruited some 50,000 fighters for the Ukraine war on behalf of Moscow. They played a key role in the capture of the Ukrainian city Bakhmut, which fell on May 20 after a 224-day siege. After the fall of Bakhmut, polling indicated that Prigozhin broke into the list of top 10 trusted officials as seen by ordinary Russians for the first time.

Yevgeny Prigozhin, the owner of the Wagner Group. AP Photo

Prigozhin has been openly critical of the way Russian Minister of Defense Sergei Shoigu and head of the General Staff Valery Gerasimov are conducting the war. In May 2023, Prigozhin organized a series of town meetings across Russia laying out his demands. In an attempt to rein in Prigozhin, Shoigu ordered that all volunteer fighters must sign a contract with the defense ministry by July 1 – something Prigozhin refuses to do.

Prigozhin’s business empire includes media outlets, the Internet Research Agency that the U.S. asserts interfered in the 2016 U.S. presidential elections, a movie series and social media channels that enable him to reach tens of millions of Russians. It forms what reporter Scott Johnson has dubbed the “Wagnerverse.”

Facing questions

Against the background of more open criticism of a war that has now blown back across the Russian border, Putin faced some tough questions at the meeting with war correspondents.

One asked why private military companies are not legal in Russia. Putin said only that it is time to change the law.

Another asked why different regions are allowed to pay different bonuses to contract soldiers from their area. In response, Putin could only offer that Russia is a federal system, and regions spend what they can afford. One blogger pointed out that the border districts inside Russia are not considered part of the “special military operation,” which means that soldiers fighting there do not get combat pay. Another asked about troop rotation and when Russians will know that the war has been won. Putin’s answers were equivocal on both points.

One participant asked Putin about the problem of “parquet generals,” a term used by Prigozhin that refers to people sitting in comfortable offices far from the front line. Putin agreed that some generals are not up to the job, but he supported Shoigu’s order that all volunteers should register with the defense ministry.

It wasn’t a full-on grilling, but neither was it a cozy chat.

Desperate measures

Judging by opinion surveys, there are few signs as yet that the military setbacks have caused any decrease in popular support for the war in Russia. Many Russians seem to believe that even if it was wrong to start the war, it would be a mistake to allow Russia to be defeated.

However, members of the Russian elite seem to share the growing unease aired among the bloggers. On May 20-21, Russian officials and policy experts attended a meeting of the influential Council on Foreign and Security Policy think tank. Judging by reports from people who attended, such as State Duma Deputy Konstantin Zatulin, there was a clear sense that the war is going badly.

In a June 1 speech, Zatulin, a prominent nationalist lawmaker, noted that none of the initial goals of the “special operation” have been realized and admitted that “Ukrainians hate us because we are killing them.”

Zatulin said that at the Council on Foreign and Security Policy meeting, one attendee suggested dropping a nuclear bomb on Rzeszow – the transport hub in southeastern Poland through which most of the West’s weapons flow into Ukraine. Indeed, Sergei Karaganov, the head of the council, published an article on June 14 in which he argued for the demonstrative use of a nuclear weapon to force the West to stop supplying arms to Ukraine.

In the 1990s, Karaganov was seen as a liberal who supported Russia’s integration with Europe. Now, he apparently believes that Russia’s inability to defeat Ukraine poses a serious threat to its security. Moreover, his talk of deploying nuclear weapons hints at an increasing view among Russian elites that the country cannot win by conventional means alone. Indeed, on June 16, Putin announced that Russia has already started transferring some tactical nuclear weapons to Belarus.

In the meantime, mercenary leader Prigozhin remains a wild card. Rarely in history have mercenary generals been able to seize political power. Perhaps the most famous mercenary of all time, Albrecht von Wallenstein, successfully commanded an army of 50,000 during the Thirty Years’ War. He became so powerful that his Habsburg paymasters had him assassinated.

In Russia’s tightly controlled political landscape, there are no precedents for a figure such as Prigozhin. He seems to have few allies among the military establishment or regional governors. As such, it is hard to imagine a scenario where he would be allowed, for example, to create his own political party, still less run for the presidency in 2024.

Yet he is clearly proving to be a thorn in Putin’s side. And the lack of progress toward victory over Ukraine is seemingly making the Russian elite increasingly nervous about how to maintain social stability and stave off political challenges from nationalists who are arguing for more aggressive prosecution of the war.

Peter Rutland, Professor of Government, Wesleyan University

This article is republished from The Conversation under a Creative Commons license. 

Is this ‘age of the delta’ coming to an end?

The land near the mouth of the Mississippi River is barely land at all. Muddy water forks into a labyrinth of pathways through a seemingly endless expanse of electric-green marsh grass, below skies thick with birds. Shrimp and crabs wriggle in the water below, and oak and cypress sprout from wet soils on higher grounds. Stretching for more than a hundred miles along the coast of Louisiana, this is one of the world’s largest, and most famous, river deltas.

Similar landforms — flat and wet and rich with wildlife — have built up wherever rivers dump their mud and sand into shallow oceans. There are thousands of such deltas across the world, ranging in size from just a few acres to, in the case of the Mississippi River’s, thousands of square miles. Many of these places, including the marshes in Louisiana, are now disappearing, often quite rapidly.

The soils are sinking, for varied and complex reasons. Worse, as the climate warms, the seas are rising. In Louisiana, a football field’s worth of land is gone every hundred minutes, a fact that is devastating for the hundred million birds that stop here on their migrations — and for the people whose families have lived amid these wetlands for generations, and sometimes millennia. Hundreds of millions of people who live on other deltas, too, could lose their homes and their livelihoods.

Later this year, a state agency in Louisiana will begin construction on what has been called the largest ecosystem restoration project in US history. A set of gates in the levee will allow water to once more carry mud into the marshes, an effort to rebuild tens of thousands of acres of land. The project has been controversial, particularly because it’s likely to affect fisheries, a major local industry.

Deltas are inherently impermanent: As climate changes through the epochs, as the seas rise and fall, deltas are formed and then buried underwater. Still, these past few thousand years can be thought of as an “age of the delta,” says Utrecht University geomorphologist Jaap Nienhuis. Not coincidentally, it has also been an age of human flourishing. “Human civilization started when modern river deltas started to appear,” Nienhuis says. “It’s scary to think of what could happen if river deltas disappear.”

Nienhuis, lead author of a 2023 paper on river deltas for the Annual Review of Earth and Planetary Sciences, spoke with Knowable Magazine about what deltas face in the coming centuries, and whether strategies like Louisiana’s “sediment diversion” can make the needed difference. This interview has been edited for length and clarity.

What are river deltas, and why are they important?

River deltas are places where river sediments settle. Rivers slow down when they reach the sea, and then all the sediments that rivers carry are dropped out. That builds land over time.

A typical delta is a place that’s flooded, sometimes by tides every couple of hours, while some deltas flood only once a year, during a monsoon or a big hurricane. The ecology and biology have to be very well adapted to this periodic inundation.

So what we typically find are swamps: cypress wetlands or mangrove forests or salt marshes, which are well-adjusted to dealing with inundation by tides. These are great places for fish. The river-carried sediment is rich with organic material, which is good for agriculture. So it’s not a surprise that many population centers are on river deltas: New Orleans, Shanghai, Tokyo, Jakarta. Much of the Netherlands is a delta. In total, 500 million people live atop river deltas across the world.

How do deltas form?

Deltas exist on a balance. Some forces are constructive. Basically, you need the river to supply a lot of sediment. That can happen naturally, through mountain-building and then erosion — that’s why we have a lot of big deltas downstream of the Himalayas.

Then there are destructive forces. Sea level is a first-order control on the evolution of deltas. When the seas are rising, it’s hard for rivers to supply enough sediment to keep up.

To understand today’s deltas, we have to go back 20,000 years, to a time when most of the planet was covered by ice sheets. A change in the Earth’s orientation toward the sun kickstarted a melting of the ice and a rise in sea level. So from 20,000 years ago to 10,000 or 8,000 years ago, we had a long period of very quickly rising seas. There wasn’t a lot of delta-building happening, and older deltas were being smothered by the water.

Then, when most of these ice sheets were nearly melted, the rates of sea-level rise began to slow. That’s when all the sediments that came out of rivers were able to build the modern deltas. That happened for most river deltas roughly around the same time — 7,000 years ago. Of course, when they first appeared, these deltas were very small. They’ve built over time.

Interestingly, the start of several ancient civilizations and ancient cities coincides with that deceleration in sea-level rise. This is not a coincidence. Humans and deltas are very intertwined.

It sounds like our species has depended on deltas throughout our recent history. Have we also affected deltas?

We have. Humans can actually be a constructive force for deltas. What we’ve seen over the past few hundred years is that deforestation upstream in a river’s watershed can cause an increase in the amount of sediment that is brought down by a river. So in that sense, humans have greatly accelerated delta growth over the past centuries.

But humans can be a destructive force, too. There is subsidence, for example: land lowering, which happens a lot in deltas because they are built from young, soft soils.

Here in the Netherlands, we started to drain the delta because we wanted to use it for agriculture. And then subsidence accelerates and the land sinks beneath your feet, since the water in the soil dries out. So now you build levees to protect yourself from all the extra flooding.

But now, because of the levees, sediments are no longer supplied to your part of the delta, so you sink even further. So you build more levees. The reason why the Netherlands — and also parts of New Orleans — are below sea level is because of that cycle. You create a situation you cannot get out of. We call that a “lock-in.”

If the constructive and destructive forces are in balance, deltas will stay the same. But it’s a very, very, very thin balance. And humans, most of the time, we’re destructive.

Do we have a sense of how big of a problem subsidence is?

We have a very good overview of all the things that could cause subsidence. But it’s very hard to measure it.

Subsidence happens on many scales at once. If ice sheets melt in, to give just one example, Canada, that can take weight off the Earth’s mantle, causing it to tilt. That means that as the ice melts — and land lifts — in Canada, the land as far away as the Mississippi delta can wind up sinking. So there are changes happening on a scale of thousands of miles.

At the same time, if you build a house and add weight to the soil, you also sink the soil. Even just standing on it, jumping on the ground — you’re changing the pore structure in the grains of soil below.

Imagine trying to also integrate what vegetation does with its roots, plus worms that go through the ground. Then you have oil extraction or taking out groundwater — changes on a medium scale. So it’s hard to validate and test our models. It’s very much a limiting factor in our ability to say something about the future of deltas.

Really, though, given how big a control sea-level rise is on delta construction, the biggest problem now is climate change. As ice sheets begin to melt again, that’s causing seas to rise much more quickly than they have in thousands of years.

It sounds like we’ve been in an “age of the delta,” but that’s now coming to an end.

Definitely. If we zoom out and look at deltas as a whole, we could lose as much as 5 percent of the land by the end of this century, mostly because of sea-level rise. And as much as 50 percent of the land — 130,000 square miles — in only a few hundred years.

So it could go very, very fast. Since sea level is so important here, a lot of that depends on what we do in terms of CO2 emissions and climate change.

We’re not going to lose every delta. When deltas are very small, and uninhabited by humans, it’s very easy for them to respond to sea level. Think of Arctic deltas, for example. Basically, when seas rise, these deltas walk with sea level up the river — the delta advances upstream, replacing what was once solid ground with wetlands.

But for many deltas, we’ve built levees, so they’re fixed — they’re not moving anywhere. And big deltas are hard to sustain. They need a lot of sediment. Those deltas are much more uncertain.

Eventually, sea-level rise will slow again, and then we’ll reenter an age of the delta, when rivers can build again. Ironically, continued high emissions might bring that moment sooner, because they would speed up the tipping point where the world’s ice sheets collapse. In that case, sea-level rise might slow again around the year 2400, and deltas could grow again by the end of this millennium, once much of Greenland and Antarctica has melted.

Though with seas 20 meters higher, they’ll be in new places. The Mississippi delta will be built somewhere close to the border with Arkansas. The Dutch Rhine delta will be in today’s Germany.

Is there anything we can do to save today’s deltas?

There are adaptation methods that can work for individual deltas. One key idea is trying to use the sediments coming down from the river in a smarter way. That typically means breaking down levees and restoring the natural flow of the river into the delta’s wetlands. These projects are sometimes called “river diversions.” The biggest projects — both in terms of cost and in terms of how much sediment is delivered to the delta — are happening in the Mississippi River delta.

But even if you include all the projects that are still in the planning stage, these kinds of restoration strategies are being applied to save 0.1 percent of the world’s deltas. That’s very small compared to the potential loss of land.

I’m a little bit skeptical about our ability to scale these strategies up and solve the whole delta problem. River diversions lead to high sedimentation rates, but these rates decline over time. The sedimentation rate is dependent on the elevation difference, so as you build your land, it gets harder and harder to keep up. And diversions work on very small parts of a delta. If one sediment diversion works very well in the Mississippi, you can’t say, “I’m going to build another right next to it.” At some point, you’re going to run out of sediment.
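The tapering sedimentation rate Nienhuis describes can be captured in a toy model. This is only an illustrative sketch: the linear gap-filling rule, the rate constants and the function names are my own assumptions, not his model or Louisiana's actual project numbers.

```python
# Toy model: sedimentation is proportional to the accommodation gap
# between current land elevation and sea level, so as the land builds,
# each year's gain shrinks -- while the sea keeps rising steadily.

def simulate_diversion(years, k=0.05, slr_per_year=0.01, z0=-1.0):
    """Track land elevation relative to the rising sea (meters).

    k            -- fraction of the remaining gap filled per year (assumed)
    slr_per_year -- sea-level rise, m/yr (assumed)
    z0           -- starting elevation of the drowned marsh, m (assumed)
    """
    z, sea = z0, 0.0
    history = []
    for _ in range(years):
        gap = sea - z                  # accommodation space below water
        deposition = k * max(gap, 0)   # deposition shrinks as gap closes
        z += deposition
        sea += slr_per_year
        history.append(z - sea)        # elevation relative to the sea
    return history

rel = simulate_diversion(100)
# Early decades gain elevation quickly; the rate tapers as the gap closes,
# and with ongoing sea-level rise the land never fully catches up.
```

With these assumed rates, the land approaches a steady offset below sea level rather than emerging, which mirrors the point that diversions "get harder and harder to keep up."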

Most river deltas will persist throughout my lifetime. But to think about my daughter’s lifetime, or my grandchildren’s — that’s going to be very, very different. It’s very strange to think about these places not existing.

That’s a dark future. Is there any reason to be optimistic?

If we really stick to the Paris Agreement, an international treaty that aims to keep global temperature rise below 2 degrees Celsius, then the picture is going to be much more nuanced. Then the optimism I have is that the age of the delta will last a long time still.

It’s not really a question of whether we’ll lose delta land, but more a question of when we’ll lose it. If we really limit our emissions, we might be able to postpone the major losses for 500 years, or a thousand years — and by then, who knows? We might all be living in floating cities!

The optimism has to be that it goes slow enough that we come up with new solutions. But in order to give ourselves the time we need, we have to start addressing climate change now.

This article originally appeared in Knowable Magazine, an independent journalistic endeavor from Annual Reviews. 

Are you part robot? A linguistic anthropologist explains how humans are like ChatGPT – both recycle language

Are we as different as we’d like to believe? Ledi Nuge/iStock via Getty Images
Brendan H. O'Connor, Arizona State University

ChatGPT is a hot topic at my university, where faculty members are deeply concerned about academic integrity, while administrators urge us to “embrace the benefits” of this “new frontier.” It’s a classic example of what my colleague Punya Mishra calls the “doom-hype cycle” around new technologies. Likewise, media coverage of human-AI interaction – whether paranoid or starry-eyed – tends to emphasize its newness.

In one sense, it is undeniably new. Interactions with ChatGPT can feel unprecedented, as when a tech journalist couldn’t get a chatbot to stop declaring its love for him. In my view, however, the boundary between humans and machines, in terms of the way we interact with one another, is fuzzier than most people would care to admit, and this fuzziness accounts for a good deal of the discourse swirling around ChatGPT.

When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought – of course I’m not a robot. On the other hand, when my email client suggests a word or phrase to complete my sentence, or when my phone guesses the next word I’m about to text, I start to doubt myself. Is that what I meant to say? Would it have occurred to me if the application hadn’t suggested it? Am I part robot? These large language models have been trained on massive amounts of “natural” human language. Does this make the robots part human?

No, you’re not a robot, but your language is not so different from an AI chatbot’s. Ihor Reshetniak/iStock via Getty Images

AI chatbots are new, but public debates over language change are not. As a linguistic anthropologist, I find human reactions to ChatGPT the most interesting thing about it. Looking carefully at such reactions reveals the beliefs about language underlying people’s ambivalent, uneasy, still-evolving relationship with AI interlocutors.

ChatGPT and the like hold up a mirror to human language. Humans are both highly original and unoriginal when it comes to language. Chatbots reflect this, revealing tendencies and patterns that are already present in interactions with other humans.

Creators or mimics?

Recently, famed linguist Noam Chomsky and his colleagues argued that chatbots are “stuck in a prehuman or nonhuman phase of cognitive evolution” because they can only describe and predict, not explain. Rather than drawing on an infinite capacity to generate new phrases, they compensate with huge amounts of input, which allows them to make predictions about which words to use with a high degree of accuracy.

This is in line with Chomsky’s historic recognition that human language could not be produced merely through children’s imitation of adult speakers. The human language faculty had to be generative, since children do not receive enough input to account for all the forms they produce, many of which they could not have heard before. That is the only way to explain why humans – unlike other animals with sophisticated systems of communication – have a theoretically infinite capacity to generate new phrases.

Noam Chomsky developed the generative theory of language acquisition.

There’s a problem with that argument, though. Even though humans are endlessly capable of generating new strings of language, people usually don’t. Humans are constantly recycling bits of language they’ve encountered before and shaping their speech in ways that respond – consciously or unconsciously – to the speech of others, present or absent.

As Mikhail Bakhtin – a Chomsky-like figure for linguistic anthropologists – put it, “our thought itself,” along with our language, “is born and shaped in the process of interaction and struggle with others’ thought.” Our words “taste” of the contexts where we and others have encountered them before, so we’re constantly wrestling to make them our own.

Even plagiarism is less straightforward than it appears. The concept of stealing someone else’s words assumes that communication always takes place between people who independently come up with their own original ideas and phrases. People may like to think of themselves that way, but the reality shows otherwise in nearly every interaction – when I parrot a saying of my dad’s to my daughter; when the president gives a speech that someone else crafted, expressing the views of an outside interest group; or when a therapist interacts with her client according to principles that her teachers taught her to heed.

In any given interaction, the framework for production – speaking or writing – and reception – listening or reading and understanding – varies in terms of what is said, how it is said, who says it and who is responsible in each case.

What AI reveals about humans

The popular conception of human language views communication primarily as something that takes place between people who invent new phrases from scratch. However, that assumption breaks down when Woebot, an AI therapy app, is trained to interact with human clients by human therapists, using conversations from human-to-human therapy sessions. It breaks down when one of my favorite songwriters, Colin Meloy of The Decemberists, tells ChatGPT to write lyrics and chords in his own style. Meloy found the resulting song “remarkably mediocre” and lacking in intuition, but also uncannily in the zone of a Decemberists song.

As Meloy notes, however, the chord progressions, themes and rhymes in human-written pop songs also tend to mirror other pop songs, just as politicians’ speeches draw freely from past generations of politicians and activists, which were already replete with phrases from the Bible. Pop songs and political speeches are especially vivid illustrations of a more general phenomenon. When anyone speaks or writes, how much is newly generated à la Chomsky? How much is recycled à la Bakhtin? Are we part robot? Are the robots part human?

People like Chomsky who say that chatbots are unlike human speakers are right. However, so are those like Bakhtin who point out that we’re never really in control of our words – at least, not as much as we’d imagine ourselves to be. In that sense, ChatGPT forces us to consider an age-old question anew: How much of our language is really ours?

Brendan H. O'Connor, Associate Professor, School of Transborder Studies, Arizona State University

This article is republished from The Conversation under a Creative Commons license. 

Titan submersible disaster underscores dangers of deep-sea exploration – an engineer explains why most ocean science is conducted with crewless submarines

Researchers are increasingly using small, autonomous underwater robots to collect data in the world’s oceans. NOAA Teacher at Sea Program, NOAA Ship PISCES, CC BY-SA
Nina Mahmoudian, Purdue University

Rescuers spotted debris from the tourist submersible Titan on the ocean floor near the wreck of the Titanic on June 22, 2023, indicating that the vessel suffered a catastrophic failure and the five people aboard were killed.

Bringing people to the bottom of the deep ocean is inherently dangerous. At the same time, climate change means collecting data from the world’s oceans is more vital than ever. Purdue University mechanical engineer Nina Mahmoudian explains how researchers reduce the risks and costs associated with deep-sea exploration: Send down subs, but keep people on the surface.

Why is most underwater research conducted with remotely operated and autonomous underwater vehicles?

When we talk about water studies, we’re talking about vast areas. And covering vast areas requires tools that can work for extended periods of time, sometimes months. Having people aboard underwater vehicles, especially for such long periods of time, is expensive and dangerous.

One of the tools researchers use is remotely operated vehicles, or ROVs. Basically, there is a cable between the vehicle and operator that allows the operator to command and move the vehicle, and the vehicle can relay data in real time. ROV technology has progressed a lot and can now reach the deep ocean – down to a depth of 6,000 meters (19,685 feet). ROVs are also better able to provide the mobility necessary for observing the seabed and gathering data.

Autonomous underwater vehicles provide another opportunity for underwater exploration. They are usually not tethered to a ship. They are typically programmed ahead of time to do a specific mission. And while they are underwater they usually don’t have constant communication. At some interval, they surface, relay the whole amount of data that they have gathered, change the battery or recharge and receive renewed instructions before again submerging and continuing their mission.
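The submerge-surface-relay cycle described above can be sketched as a simple loop. Everything here is an illustrative assumption (the class name, battery numbers and data format), not any real vehicle's control software.

```python
# Minimal sketch of an autonomous underwater vehicle's mission cycle:
# run a preprogrammed leg with no communication, then surface to relay
# all buffered data, recharge and receive the next leg's instructions.

from dataclasses import dataclass, field

@dataclass
class Glider:
    battery: float = 100.0                       # percent (assumed units)
    buffer: list = field(default_factory=list)   # data gathered while submerged

    def run_leg(self, waypoints):
        """Submerged phase: follow preprogrammed waypoints, no comms."""
        for wp in waypoints:
            self.buffer.append({"waypoint": wp, "reading": "CTD sample"})
            self.battery -= 1.0                  # assumed cost per waypoint

    def surface(self):
        """Surfaced phase: fix position, relay all data, recharge."""
        relayed, self.buffer = self.buffer, []   # upload everything at once
        self.battery = 100.0                     # recharge or battery swap
        return relayed

glider = Glider()
mission = [["A1", "A2", "A3"], ["B1", "B2"]]     # two legs between surfacings
uploads = []
for leg in mission:
    glider.run_leg(leg)          # no communication while underwater
    uploads.append(glider.surface())
```

The key design point the sketch captures is that data arrives in bursts, one batch per surfacing, rather than as a continuous stream the way it does over an ROV's tether.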

What can remotely operated and autonomous underwater vehicles do that crewed submersibles can’t, and vice versa?

Crewed submersibles are exciting for the public and those aboard, and they benefit from the added capabilities humans bring in operating instruments and making decisions, similar to crewed space exploration. However, they are much more expensive than uncrewed exploration because of the required size of the platforms and the need for life-support and safety systems. Crewed submersibles today cost tens of thousands of dollars a day to operate.

Uncrewed systems provide better opportunities for exploration, at less cost and risk, when operating over vast areas and in inhospitable locations. Remotely operated and autonomous underwater vehicles let operators perform tasks that are dangerous for humans, like observing under ice and detecting underwater mines.

Remotely operated vehicles can operate under Antarctic ice and in other dangerous places.

How has the technology for deep ocean research evolved?

The technology has advanced dramatically in recent years due to progress in sensors and computation. There has been great progress in miniaturization of acoustic sensors and sonars for use underwater. Computers have also become more miniaturized, capable and power efficient. There has been a lot of work on battery technology and connectors that are watertight. Additive manufacturing and 3D printing also help build hulls and components that can withstand the high pressures at depth at much lower costs.

There has also been great progress toward increasing autonomy using more advanced algorithms, in addition to traditional methods for navigation, localization and detection. For example, machine learning algorithms can help a vehicle detect and classify objects, whether stationary like a pipeline or mobile like schools of fish.

What kinds of discoveries have been made using remotely operated and autonomous underwater vehicles?

One example is underwater gliders. These are buoyancy-driven autonomous underwater vehicles. They can stay in water for months. They can collect data on pressure, temperature and salinity as they go up and down in water. All of these are very helpful for researchers to have an understanding of changes that are happening in oceans.

One of these platforms traveled across the North Atlantic Ocean from the coast of Massachusetts to Ireland for nearly a year in 2016 and 2017. The amount of data that was captured in that amount of time was unprecedented. To put it in perspective, a vehicle like that costs about $200,000. The operators were remote. Every eight hours the glider came to the surface, got connected to GPS and said, “Hey, I am here,” and the crew basically gave it the plan for the next leg of the mission. If a crewed ship had been sent to gather that amount of data for that long, it would have cost millions.

In 2019, researchers used an autonomous underwater vehicle to collect invaluable data about the seabed beneath the Thwaites glacier in Antarctica.

Energy companies are also using remotely operated and autonomous underwater vehicles for inspecting and monitoring offshore renewable energy and oil and gas infrastructure on the seabed.

Where is the technology headed?

Underwater systems are slow-moving platforms, so deploying them in large numbers would give researchers an advantage in covering large areas of ocean. A great deal of effort is going into coordination and fleet-oriented autonomy for these platforms, as well as into advancing data gathering with onboard sensors such as cameras, sonars and dissolved oxygen sensors. Another aspect of advancing vehicle autonomy is real-time underwater decision-making and data analysis.

What is the focus of your research on these submersibles?

My team and I focus on developing navigational and mission-planning algorithms for persistent operations, meaning long-term missions with minimal human oversight. The goal is to respond to two of the main constraints in the deployment of autonomous systems. One is battery life. The other is unknown situations.

The author’s research includes a project to allow autonomous underwater vehicles to recharge their batteries without human intervention.

For battery life, we work on at-sea recharging, both underwater and at the surface. We are developing tools for autonomous deployment, recovery, recharging and data transfer for longer missions at sea. For unknown situations, we are working on recognizing and avoiding obstacles and adapting to different ocean currents – basically allowing a vehicle to navigate rough conditions on its own.

To adapt to changing dynamics and component failures, we are working on methodologies to help the vehicle detect the change and compensate to be able to continue and finish the mission.

These efforts will enable long-term ocean studies including observing environmental conditions and mapping uncharted areas.

Nina Mahmoudian, Associate Professor of Mechanical Engineering, Purdue University

This article is republished from The Conversation under a Creative Commons license. 

Moms lose significant sleep and free time during kids’ school year, new study finds

Moms get about 25 minutes less sleep each weeknight when their kids’ school is in session. Jose Luis Pelaez Inc/DigitalVision Collection/Getty Images
Todd Jones, Mississippi State University; Benjamin Cowan, Washington State University, and Jeff Swigert, Southern Utah University

The Research Brief is a short take about interesting academic work.

The big idea

Moms of school-age children get significantly less sleep during the school year than during the summer.

We are economists who specialize in education and health research. We combined extensive data on school district schedules with information derived from the American Time Use Survey to explore the ways families use their time differently depending on whether school is in session or out for summer.

We found that mothers on average sleep 25 minutes less, have 28 minutes less free time and spend seven minutes less exercising on weekdays during the school year than during the summer. For comparison, fathers get 11 minutes less sleep during the school year relative to the summer, have 21 minutes less free time and exercise five minutes less.

Conversely, mothers spend about half an hour more per day during the school year taking care of others, including kids, and five additional minutes on travel – which often involves driving their kids to and from school.

Interestingly, even though both mothers and fathers spend more time physically present with children in their household during summer months, both spend more time actively engaged with the children – such as helping with homework or reading together – during the school year. However, the effect is almost three times greater for women than it is for men: Moms spend an extra 34 minutes per day during the school year actively engaged with the children versus an extra 12 minutes for dads.

Our study also observed teenagers ages 15-17, as they are the only children included in the time use survey.

During the school year, teenagers sleep about one hour and 20 minutes – or 13% – less than they do during the summer, and they have over two hours – or 33% – less free time each day. This reduction in free time includes nearly an hour and a half less time spent each day on television, games – including video games – and computer use.
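The reported percentages imply rough summer baselines for teen sleep and free time, which can be backed out with simple arithmetic. A quick sketch (the baselines below are inferred from the reported figures, not stated in the study itself):

```python
# Back out implied summer baselines from the reported school-year reductions.
# These baselines are inferred from the article's figures for illustration.
sleep_cut_min, sleep_cut_pct = 80, 0.13   # 1h20m is about 13% of summer sleep
free_cut_min, free_cut_pct = 120, 0.33    # 2h is about 33% of summer free time

print(f"Implied summer sleep: {sleep_cut_min / sleep_cut_pct / 60:.1f} hours/day")
print(f"Implied summer free time: {free_cut_min / free_cut_pct / 60:.1f} hours/day")
```

That works out to roughly 10 hours of sleep and 6 hours of free time per summer day, consistent with teens on break.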

Why it matters

Prior research has shown there is a gender gap in mental health, with women faring worse than men on measures such as anxiety and depression. Women are also four times as likely as men to be diagnosed with seasonal affective disorder, a type of depression that typically occurs in fall and winter.

Our results suggest the possibility that these issues are exacerbated by the greater demands placed on mothers during the school year.

Our findings on teen sleep support arguments for later school start times. The American Academy of Pediatrics has recommended that middle and high schools start no earlier than 8:30 a.m. so adolescents can get sufficient sleep to support mental health and academic achievement. However, the average start time for U.S. high schools is 8 a.m.

Our results also suggest that when school is out, teenagers may be especially susceptible to media overconsumption. Teens themselves say they spend too much time on screens.

What we still don’t know

We do not yet know how these changes in schedules affect teen mental health. While some measures of teen mental health improve during summer months, we found that teenagers spend the lion’s share of their extra summer free time in front of screens, and studies have linked excessive screen time to higher levels of depression and poorer mental health.

Todd Jones, Assistant Professor of Economics, Mississippi State University; Benjamin Cowan, Associate Professor of Economics, Washington State University, and Jeff Swigert, Assistant Professor of Economics, Southern Utah University

This article is republished from The Conversation under a Creative Commons license. 

Family Fun with a Pop: Kid-friendly snacks and treats perfect for sharing

When hunger strikes and family fun calls for a snack to keep the excitement going, take your kiddos to the kitchen to whip up a tasty treat for all to enjoy. Whether you’re celebrating a special occasion or watching favorite movies, turn to an ingredient like popcorn that encourages creativity. Plus, it’s a delicious way for adults to turn back the clock and recreate childhood memories while making new ones with the family.

Consider these popcorn-inspired snacks to serve during your next adventure:

  • Take yourself back in time to your own childhood with a classic pairing that never goes out of style: Peanut Butter and Jelly Popcorn. This version offers a fun way to jazz up popcorn for kids and adults alike with a sprinkling of strawberries and dried cranberries on top for added nutrition.
  • When it’s time for a little fun in the sun or a trip to the park, sweeten up the celebration with this Graham Cracker Picnic Mix that’s a flavorful, kid-favorite combination of popcorn, bear-shaped graham crackers, marshmallows, fudge-covered pretzels and miniature cookies.
  • Pizza is hard to beat as a snacking solution the whole family can enjoy, and this Grab and Go Pizza Popcorn is no exception. Ready in just minutes, it lets little ones help mix popcorn, Parmesan cheese and seasonings to prepare a batch quickly before heading out the door.
  • Colorful and fun, these Minty Green Popcorn Clusters are simple to make and a perfect treat to share. Just melt together marshmallows, butter, salt, green food coloring and peppermint extract before tossing with popcorn and green candy-coated chocolates.

Peanut Butter and Jelly Popcorn

Servings: 4-6

  • 8 cups popcorn
  • 2 tablespoons raspberry jam
  • 1/2 cup white chocolate chips
  • 3 tablespoons smooth natural peanut butter
  • 1/4 cup freeze-dried strawberries
  • 1/4 cup dried cranberries
  1. In large bowl, stir popcorn and jam until evenly coated. Transfer to parchment paper-lined baking sheet.
  2. In small heatproof bowl over small saucepan of barely simmering water, add white chocolate chips and peanut butter. Cook, stirring occasionally, 3-5 minutes, or until smooth and melted.
  3. Drizzle peanut butter mixture over popcorn mixture. Sprinkle with strawberries and cranberries.
  4. Refrigerate 10 minutes, or until peanut butter mixture is set; break into pieces to serve. Store in airtight container in refrigerator up to 1 week.

Graham Cracker Picnic Mix

Yield: About 12 cups

  • 1/4 cup vegetable oil
  • 1/4 cup sugar
  • 5-6 drops pink neon liquid food color or desired color
  • 1/2 cup unpopped popcorn kernels
  • 1 package mini pastel marshmallows
  • 1 cup bear-shaped graham crackers
  • 1 package white fudge-covered pretzels
  • 1 cup mini cookies
  1. In large pot over medium heat, stir oil, sugar and food coloring.
  2. Stir in popcorn kernels; cover. Cook until popcorn begins to pop. Shake pot over heat until popping slows.
  3. Remove pan from heat and pour popcorn into large bowl.
  4. Add marshmallows, graham crackers, pretzels and cookies then toss lightly.

Grab and Go Pizza Popcorn

Yield: 6 quarts

  • 6 quarts popped popcorn
  • olive oil cooking spray
  • 1 cup grated Parmesan cheese
  • 2 teaspoons garlic salt
  • 2 teaspoons paprika
  • 1 tablespoon Italian seasoning
  1. Place popcorn in large, sealable plastic container or 2 1/2-gallon plastic sealable bag.
  2. Spray popcorn lightly with cooking spray.
  3. Sprinkle cheese, garlic salt, paprika and Italian seasoning over popcorn and shake to distribute evenly.
  4. To serve, scoop popcorn into reusable plastic cups.

Minty Green Popcorn Clusters

Yield: 32 clusters

  • 12 cups unsalted, unbuttered, popped popcorn
  • 4 cups mini marshmallows
  • 2 tablespoons butter or light olive oil
  • 1/2 teaspoon salt
  • 1 tablespoon green food coloring
  • 1/4 teaspoon peppermint extract
  • 1 cup green candy-coated chocolate candies
  1. Place popcorn in large mixing bowl.
  2. In saucepan over medium heat, melt marshmallows, butter and salt, stirring occasionally, until smooth. Remove from heat; stir in food coloring and peppermint extract.
  3. Toss marshmallow mixture with popcorn; cool 2-3 minutes, or until cool enough to handle. Toss with chocolate candies.
  4. Shape 3 tablespoons of popcorn mixture into small cluster; repeat with remaining popcorn mixture. Place on wax paper-lined baking sheet; cool completely.
Popcorn Board

Almost no one uses Bitcoin as currency, new data proves. It’s actually more like gambling

John Hawkins, University of Canberra

Bitcoin boosters like to claim Bitcoin, and other cryptocurrencies, are becoming mainstream. There’s a good reason to want people to believe this.

The only way the average punter will profit from crypto is to sell it for more than they bought it. So it’s important to talk up the prospects to build a “fear of missing out”.

There are loose claims that a large proportion of the population – generally in the range of 10% to 20% – now hold crypto. Sometimes these numbers are based on counting crypto wallets, or on surveying wealthy people.

But the hard data on Bitcoin use shows it is rarely bought for the purpose for which it ostensibly exists: to buy things.

Little use for payments

The whole point of Bitcoin, as its creator “Satoshi Nakamoto” stated in the opening sentence of the 2008 white paper outlining the concept, was that:

A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.

The latest data demolishing this idea comes from Australia’s central bank.

Every three years the Reserve Bank of Australia surveys a representative sample of 1,000 adults about how they pay for things. As the following graph shows, cryptocurrency is making almost no impression as a payments instrument, being used by no more than 2% of adults.

Payment methods being used by Australians

Alternative payment methods, share of all respondents, 2022
Reserve Bank calculations of Australians' awareness vs use of different payment methods, based on Ipsos data.

By contrast more recent innovations, such as “buy now, pay later” services and PayID, are being used by around a third of consumers.

These findings confirm 2022 data from the US Federal Reserve, showing just 2% of the adult US population made a payment using a cryptocurrency, and from Sweden’s Riksbank, showing less than 1% of Swedes made payments using crypto.

The problem of price volatility

One reason for this, and why prices for goods and services are virtually never expressed in crypto, is that most cryptocurrencies fluctuate wildly in value. A shop or cafe with price labels or a blackboard list of prices set in Bitcoin would have to change them every hour.

The following graph from the Bank for International Settlements shows changes in the exchange rate of ten major cryptocurrencies against the US dollar, compared with the euro and Japan’s yen, over the past five years. Such volatility negates cryptocurrency’s value as a currency.

Cryptocurrency’s volatile ways

90-day rolling standard deviation of daily returns for major cryptocurrencies compared with the Euro and Yen. The Crypto Multiplier, BIS Working Papers, No. 1104, CC BY

There have been attempts to solve this problem with so-called “stablecoins”. These promise to maintain steady value (usually against the US dollar).

But the spectacular collapse of one of these ventures, Terra, once one of the largest cryptocurrencies, showed the vulnerability of their mechanisms. Even a company with the enormous resources of Facebook owner Meta has given up on its stablecoin venture, Libra/Diem.

This helps explain the failed experiments with making Bitcoin legal tender in the two countries that have tried it: El Salvador and the Central African Republic. The Central African Republic has already revoked Bitcoin’s status. In El Salvador only a fifth of firms accept Bitcoin, despite the law saying they must, and only 5% of sales are paid in it.

Storing value, hedging against inflation

If Bitcoin isn’t used for payments, what use does it have?

The major attraction – one endorsed by mainstream financial publications – is as a store of value, particularly in times of inflation, because Bitcoin has a hard cap on the number of coins that will ever be “mined”.

As Forbes writers argued a few weeks ago:

In terms of quantity, there are only 21 million Bitcoins released as specified by the ASCII computer file. Therefore, because of an increase in demand, the value will rise which might keep up with the market and prevent inflation in the long run.

The only problem with this argument is recent history. Over the course of 2022 the purchasing power of major currencies (the US dollar, the euro and the pound) dropped by about 7-10%. The purchasing power of a Bitcoin dropped by about 65%.
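Those figures are easy to sanity-check. A rough sketch, using approximate round-number prices assumed for illustration (Bitcoin near US$46,000 at the start of 2022 and near US$16,500 at the end):

```python
# Back-of-the-envelope check of the purchasing-power comparison.
# Prices are approximate round numbers assumed for illustration.
btc_start, btc_end = 46_000, 16_500  # USD per bitcoin, early vs. late 2022 (approx.)

btc_drop = (btc_start - btc_end) / btc_start
print(f"Bitcoin purchasing-power drop over 2022: {btc_drop:.0%}")  # about 64%

# A currency with 8% annual inflation loses about 7.4% of its purchasing power:
inflation = 0.08
fiat_drop = 1 - 1 / (1 + inflation)
print(f"Purchasing-power drop at 8% inflation: {fiat_drop:.1%}")
```

Even granting some slack in the exact prices, the gap between a single-digit loss and a roughly two-thirds loss is stark.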

Speculation or gambling?

Bitcoin’s price has always been volatile, and always will be. If its price were to stabilise somehow, those holding it as a speculative punt would soon sell it, which would drive down the price.

But most people buying Bitcoin essentially as a speculative token, hoping its price will go up, are likely to be disappointed. A BIS study has found the majority of Bitcoin buyers globally between August 2015 and December 2022 have made losses.

The “market value” of all cryptocurrencies peaked at US$3 trillion in November 2021. It is now about US$1 trillion.

Bitcoin’s highest price was about US$60,000 in 2021, US$40,000 in 2022 and so far in 2023 only US$30,000. Google searches show that public interest in Bitcoin also peaked in 2021. In the US, the proportion of adults with internet access holding cryptocurrencies fell from 11% in 2021 to 8% in 2022.

UK government research published in 2022 found that 52% of British crypto holders owned it as a “fun investment”, which sounds like a euphemism for gambling. Another 8% explicitly said it was for gambling.

The UK parliament’s Treasury Committee, a group of MPs who examine economic and financial issues, has strongly recommended regulating cryptocurrency as a form of gambling rather than as a financial product. They argue that continuing to treat “unbacked crypto assets as a financial service will create a ‘halo’ effect that leads consumers to believe that this activity is safer than it is, or protected when it is not”.

Whatever the merits of this proposal, the UK committee’s underlying point is solid. Buying crypto has more in common with gambling than with investing. Proceed at your own risk, and don’t “invest” what you can’t afford to lose.

John Hawkins, Senior Lecturer, Canberra School of Politics, Economics and Society, University of Canberra

This article is republished from The Conversation under a Creative Commons license. 

More than 1.5 million Americans lost Medicaid coverage in the spring of 2023 due to the end of pandemic policies – and paperwork problems

Medicaid helps millions of low-income Americans get health care. skynesher/E+ via Getty Images
Maithreyi Gopalan, Penn State

At least 1.5 million Americans lost Medicaid coverage in April, May and the first three weeks of June 2023, according to the Kaiser Family Foundation, a nonprofit that tracks health data.

Because only 25 states had publicly reported this data as of June 22, the actual number of people who lost coverage through Medicaid, the government’s main health insurance program for low-income people and people with certain disabilities, is surely much higher.

This swift decline in Medicaid enrollment follows a huge increase that started in early 2020 and was brought about by temporary policy changes in effect for the first three years of the COVID-19 pandemic. During that time, the federal government didn’t let states, which administer Medicaid, drop anyone from the program – even if their income grew too high to qualify.

As of January 2023, the most recent month for which full data is available, a total of 93 million Americans were insured through either Medicaid or the Children’s Health Insurance Program, known as CHIP, a related program. That marked a 30.7% increase from February 2020.

The federal government has estimated that 15 million people will lose their coverage, including 5.3 million children, by mid-2024 due to the end of the continuous enrollment policy.

Public health emergency over

The sharp spike in Medicaid enrollment stopped abruptly because the U.S. COVID-19 public health emergency expired.

States now must phase out their continuous enrollment policies, but they are doing it on different schedules. Some began in April 2023; others started to send out termination letters in May or June. There are also states that will not begin this process until later in the year or are taking steps to minimize the number of people losing their coverage.

For about 3 in 4 of the people who lost their Medicaid coverage, it was for procedural reasons, such as not filing required paperwork. The remaining 1 in 4 probably became ineligible due to an increase in their income.

Gains from Medicaid

There is mounting evidence that Medicaid has many benefits for society – especially children.

For example, when low-income families remain in the program for long periods of time, they tend to have lower child mortality rates. Medicaid coverage is also associated with kids faring better in school.

Researchers have also determined that federal and state governments can see boosts in tax revenue that exceed what they spend on Medicaid and CHIP when families obtain health insurance coverage through these programs. That’s because having better access to health care in the long term is associated with being healthier, staying in school longer and eventually earning a higher income.

The toll that the steep decline in health insurance coverage now underway will take on Americans remains to be seen.

Maithreyi Gopalan, Assistant Professor of Education and Public Policy, Penn State

This article is republished from The Conversation under a Creative Commons license.

The folly of making art with text-to-image generative AI

Obtaining a desired image can be a long exercise in trial and error. OpenAI
Ahmed Elgammal, Rutgers University

Making art using artificial intelligence isn’t new. It’s as old as AI itself.

What’s new is that a wave of tools now let most people generate images by entering a text prompt. All you need to do is write “a landscape in the style of van Gogh” into a text box, and the AI can create a beautiful image as instructed.

The power of this technology lies in its capacity to use human language to control art generation. But do these systems accurately translate an artist’s vision? Can bringing language into art-making truly lead to artistic breakthroughs?

Engineering outputs

I’ve worked with generative AI as an artist and computer scientist for years, and I would argue that this new type of tool constrains the creative process.

When you write a text prompt to generate an image with AI, there are infinite possibilities. If you’re a casual user, you might be happy with what AI generates for you. And startups and investors have poured billions into this technology, seeing it as an easy way to generate graphics for articles, video game characters and advertisements.

Generative AI is seen as a promising tool for coming up with video game characters. Benlisquare/Wikimedia Commons, CC BY-SA

In contrast, an artist might need to write an essaylike prompt to generate a high-quality image that reflects their vision – with the right composition, the right lighting and the correct shading. That long prompt is not necessarily descriptive of the image; it typically strings together keywords meant to steer the system toward what’s in the artist’s mind. There’s a relatively new term for this: prompt engineering.

Basically, the role of an artist using these tools is reduced to reverse-engineering the system to find the right keywords to compel the system to generate the desired output. It takes a lot of effort, and much trial and error, to find the right words.

AI isn’t as intelligent as it seems

To learn how to better control the outputs, it’s important to recognize that most of these systems are trained on images and captions from the internet.

Think about what a typical image caption tells about an image. Captions are typically written to complement the visual experience in web browsing.

For example, the caption might describe the name of the photographer and the copyright holder. On some websites, like Flickr, a caption typically describes the type of camera and the lens used. On other sites, the caption describes the graphic engine and hardware used to render an image.

So to write a useful text prompt, users need to insert many nondescriptive keywords for the AI system to create a corresponding image.

Today’s AI systems are not as intelligent as they seem; they are essentially smart retrieval systems that have a huge memory and work by association.

Artists frustrated by a lack of control

Is this really the sort of tool that can help artists create great work?

At Playform AI, a generative AI art platform that I founded, we conducted a survey to better understand artists’ experiences with generative AI. We collected responses from over 500 digital artists, traditional painters, photographers, illustrators and graphic designers who had used platforms such as DALL-E, Stable Diffusion and Midjourney, among others.

Only 46% of the respondents found such tools to be “very useful,” while 32% found them somewhat useful but couldn’t integrate them into their workflow. The rest of the users – 22% – didn’t find them useful at all.

The main limitation artists and designers highlighted was a lack of control. On a scale of 0 to 10, with 10 being the most control, respondents rated their ability to control the outcome between 4 and 5. Half the respondents found the outputs interesting, but not of a high enough quality to be used in their practice.

When it came to beliefs about whether generative AI would influence their practice, 90% of the artists surveyed thought that it would; 46% believed that the effect would be a positive one, with 7% predicting that it would have a negative effect. And 37% thought their practice would be affected but weren’t sure in what way.

The best visual art transcends language

Are these limitations fundamental, or will they just go away as the technology improves?

Of course, newer versions of generative AI will give users more control over outputs, along with higher resolutions and better image quality.

But to me, the main limitation, as far as art is concerned, is foundational: it’s the process of using language as the main driver in generating the image.

Visual artists, by definition, are visual thinkers. When they imagine their work, they usually draw from visual references, not words – a memory, a collection of photographs or other art they’ve encountered.

When language is in the driver’s seat of image generation, I see an extra barrier between the artist and the digital canvas. Pixels will be rendered only through the lens of language. Artists lose the freedom of manipulating pixels outside the boundaries of semantics.

The same input can lead to a range of random outputs. OpenAI/Wikimedia Commons

There’s another fundamental limitation in text-to-image technology.

If two artists enter the exact same prompt, it’s very unlikely that the system will generate the same image. That’s not due to anything the artist did; the different outcomes are simply due to the AI starting from different random initial images.

In other words, the artist’s output is boiled down to chance.

Nearly two-thirds of the artists we surveyed had concerns that their AI generations might be similar to other artists’ works and that the technology does not reflect their identity – or even replaces it altogether.

The issue of artist identity is crucial when it comes to making and recognizing art. In the 19th century, when photography started to become popular, there was a debate about whether photography was a form of art. It came down to a court case in France in 1861 to decide whether photography could be copyrighted as an art form. The decision hinged on whether an artist’s unique identity could be expressed through photographs.

Those same questions emerge when considering AI systems that are taught with the internet’s existing images.

Before the emergence of text-to-image prompting, creating art with AI was a more elaborate process: Artists usually trained their own AI models based on their own images. That allowed them to use their own work as visual references and retain more control over the outputs, which better reflected their unique style.

Text-to-image tools might be useful for certain creators and casual everyday users who want to create graphics for a work presentation or a social media post.

But when it comes to art, I can’t see how text-to-image software could adequately reflect the artist’s true intentions or capture the beauty and emotional resonance of works that grip viewers and make them see the world anew.

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. 

Friday, June 23, 2023

Proving Fermat’s last theorem: 2 mathematicians explain how building bridges within the discipline helped solve a centuries-old mystery

Andrew Wiles, the mathematician who presented a proof of Fermat’s last theorem back in 1993, stands next to the famous result. AP Photo/Charles Rex Arbogast
Maxine Calle, University of Pennsylvania and David Bressoud, Macalester College

On June 23, 1993, the mathematician Andrew Wiles gave the last of three lectures detailing his solution to Fermat’s last theorem, a problem that had remained unsolved for three and a half centuries. Wiles’ announcement caused a sensation, both within the mathematical community and in the media.

Beyond providing a satisfying resolution to a long-standing problem, Wiles’ work marks an important moment in the establishment of a bridge between two important, but seemingly very different, areas of mathematics.

History demonstrates that many of the greatest breakthroughs in math involve making connections between seemingly disparate branches of the subject. These bridges allow mathematicians, like the two of us, to transport problems from one branch to another and gain access to new tools, techniques and insights.

What is Fermat’s last theorem?

Fermat’s last theorem is similar to the Pythagorean theorem, which states that the sides of any right triangle give a solution to the equation x² + y² = z².

The Pythagorean theorem, named for the ancient Greek philosopher Pythagoras, is a fundamental result in Euclidean geometry that relates the lengths of the sides of a right triangle. AmericanXplorer13 via Wikimedia Commons, CC BY-SA 3.0

Every differently sized triangle gives a different solution, and in fact there are infinitely many solutions where all three of x, y and z are whole numbers – the smallest example is x=3, y=4 and z=5.

Fermat’s last theorem is about what happens if the exponent changes to something greater than 2. Are there whole-number solutions to x³ + y³ = z³? What if the exponent is 10, or 50, or 30 million? Or, most generally, what about any positive number bigger than 2?

The French mathematician Pierre de Fermat. Rolland Lefebvre via Wikimedia Commons

Around the year 1637, Pierre de Fermat claimed that the answer was no: there are no three positive whole numbers that are a solution to xⁿ + yⁿ = zⁿ for any n bigger than 2. The French mathematician scribbled this claim into the margins of his copy of a math textbook from ancient Greece, declaring that he had a marvelous proof that the margin was “too narrow to contain.”
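Fermat’s claim is easy to probe, if not to prove, by brute force. A sketch (the search bounds here are arbitrary, and no finite search settles the question):

```python
# Brute-force search for whole-number solutions to x**n + y**n == z**n.
# Many solutions turn up for n = 2; none appear for n = 3, matching Fermat's claim.

def solutions(n, limit):
    """All (x, y, z) with 1 <= x <= y and z <= limit solving x**n + y**n == z**n."""
    powers = {z**n: z for z in range(1, limit + 1)}  # nth power -> its root
    found = []
    for x in range(1, limit + 1):
        for y in range(x, limit + 1):
            total = x**n + y**n
            if total in powers:
                found.append((x, y, powers[total]))
    return found

print(solutions(2, 20))   # Pythagorean triples: (3, 4, 5), (5, 12, 13), ...
print(solutions(3, 200))  # [] -- no solutions, as Fermat's last theorem guarantees
```

The search for n = 3 comes back empty no matter how far the bound is pushed, but only the theorem itself explains why.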

Fermat’s purported proof was never found, and his “last theorem” from the margins, published posthumously by his son, went on to plague mathematicians for centuries.

Searching for a solution

For the next 356 years, no one could find Fermat’s missing proof, but no one could prove him wrong either – not even Homer Simpson. The theorem quickly gained a reputation for being incredibly difficult or even impossible to prove, with thousands of incorrect proofs put forward. The theorem even earned a spot in the Guinness World Records as the “most difficult math problem.”

That is not to say that there was no progress. Fermat himself had proved it for n=4, and Leonhard Euler later for n=3. Many other mathematicians, including the trailblazer Sophie Germain, contributed proofs for individual values of n, inspired by Fermat’s methods.

But knowing Fermat’s last theorem is true for certain numbers isn’t enough for mathematicians – we need to know it’s true for infinitely many of them. Mathematicians wanted a proof that would work for all numbers bigger than 2 at once, but for centuries it seemed as though no such proof could be found.

However, toward the end of the 20th century, a growing body of work suggested Fermat’s last theorem should be true. At the heart of this work was something called the modularity conjecture, also known as the Taniyama-Shimura conjecture.

A bridge between two worlds

A graph of an elliptic curve. Googolplexian1221, CC BY-SA 4.0, via Wikimedia Commons

The modularity conjecture proposed a connection between two seemingly unrelated mathematical objects: elliptic curves and modular forms.

Elliptic curves are neither ellipses nor curves. They are doughnut-shaped spaces of solutions to cubic equations, like y² = x³ – 3x + 1.
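To make "space of solutions" concrete, here is a small sketch (my own, not from the article; `on_curve` and `integer_points` are invented names) that scans a grid of whole-number points and keeps the ones lying on the example curve y² = x³ – 3x + 1. The full solution space lives over the complex numbers, which is where the doughnut shape comes from; the integer points are just the easiest ones to find by hand.

```python
# Scan a small grid for integer points on the elliptic curve
# y**2 == x**3 - 3*x + 1, treating the curve as its set of solutions.

def on_curve(x, y):
    return y * y == x**3 - 3 * x + 1

integer_points = [
    (x, y)
    for x in range(-10, 11)
    for y in range(-40, 41)
    if on_curve(x, y)
]
print(integer_points)  # integer solutions are scarce, but they exist
```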

A modular form is a kind of function which takes in certain complex numbers – numbers with two parts: a real part and an imaginary part – and outputs another complex number. What makes these functions special is that they are highly symmetrical, meaning there are lots of conditions on what they can look like.
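For readers who have seen some complex analysis, the symmetry condition can be written down explicitly. In the simplest setting (the full modular group), a modular form f of weight k takes a complex number τ with positive imaginary part and satisfies, for all integers a, b, c, d with ad − bc = 1:

```latex
f\!\left(\frac{a\tau + b}{c\tau + d}\right) = (c\tau + d)^k \, f(\tau)
```

That one equation encodes infinitely many constraints on f, which is why modular forms are so rigid and so rare.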

The symmetries of a modular form can be seen in how it transforms a disc. Linas Vepstas, CC BY-SA 3.0, via Wikimedia Commons

There is no reason to expect that those two concepts are related, but that is what the modularity conjecture implied.

Finally, a proof

The modularity conjecture doesn’t appear to say anything about equations like xⁿ + yⁿ = zⁿ. But work by mathematicians in the 1980s showed a link between these new ideas and Fermat’s old theorem.

First, in 1985, Gerhard Frey realized that if Fermat was wrong and there could be a solution to xⁿ + yⁿ = zⁿ for some n bigger than 2, that solution would produce a peculiar elliptic curve. Then Kenneth Ribet showed in 1986 that such a curve could not exist in a universe where the modularity conjecture was also true.
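Frey's peculiar curve can be written down. A hypothetical counterexample aⁿ + bⁿ = cⁿ (with n bigger than 2) would give rise to what is now called the Frey curve:

```latex
y^2 = x\,(x - a^n)(x + b^n)
```

Ribet's theorem showed that this particular elliptic curve would be too strange to be modular, so if every elliptic curve of its kind is modular, no such solution a, b, c can exist in the first place.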

Their work implied that if mathematicians could prove the modularity conjecture, then Fermat’s last theorem had to be true. For many mathematicians, including Andrew Wiles, working on the modularity conjecture became a path to proving Fermat’s last theorem.

Wiles worked for seven years, mostly in secret, trying to prove this difficult conjecture. By 1993, he was close to having a proof of a special case of the modularity conjecture – which was all he needed to prove Fermat’s last theorem.

He presented his work in a series of lectures at the Isaac Newton Institute in June 1993. Though subsequent peer review found a gap in Wiles’ proof, Wiles and his former student Richard Taylor worked for another year to fill in that gap and cement Fermat’s last theorem as a mathematical truth.

Lasting consequences

The impacts of Fermat’s last theorem and its solution continue to reverberate through the world of mathematics. In 2001, a group of researchers, including Taylor, gave a full proof of the modularity conjecture in a series of papers that were inspired by Wiles’ work. This completed bridge between elliptic curves and modular forms has been – and will continue to be – foundational to understanding mathematics, even beyond Fermat’s last theorem.

Wiles’ work is cited as beginning “a new era in number theory” and is central to important pieces of modern math, including a widely used encryption technique and a huge research effort known as the Langlands Program that aims to build a bridge between two fundamental areas of mathematics: algebraic number theory and harmonic analysis.

Although Wiles worked mostly in isolation, he ultimately needed help from his peers to identify and fill in the gap in his original proof. Increasingly, mathematics today is a collaborative endeavor, as witnessed by what it took to finish proving the modularity conjecture. The problems are large and complex and often require a variety of expertise.

Andrew Wiles on winning the Abel Prize, a high honor in mathematics, in 2016 for his work on Fermat’s last theorem.

So, finally, did Fermat really have a proof of his last theorem, as he claimed? Knowing what mathematicians know now, many of us today don’t believe he did. Although Fermat was brilliant, he was sometimes wrong. Mathematicians can accept that he believed he had a proof, but it’s unlikely that his proof would stand up to modern scrutiny.

Maxine Calle, Ph.D. Candidate in Mathematics, University of Pennsylvania and David Bressoud, Professor Emeritus of Mathematics, Macalester College

This article is republished from The Conversation under a Creative Commons license.
