Wednesday, July 17, 2024

Ice core freezer gets an upgrade

Here's some fun news for summer: plenty of old freezers use refrigerants that are being phased out under the Montreal Protocol (including CFCs and HCFCs for their ozone-eating damage, and even HFCs for their greenhouse warming potential) -- including the freezers that scientists use to store ice cores taken from Greenland, the Antarctic and mountaintop glaciers. The NSF storage facility in Denver, which currently uses an HCFC, is now getting an upgrade -- to transcritical CO2.

Most fridges still use HFCs... watch this space to see what winds up replacing them. HFOs? Propane? Ammonia? Or CO2? The hunt is on!

https://www.nature.com/articles/d41586-024-02287-8

Tuesday, July 2, 2024

When climate change wrecks science

The warming planet is muddling ice core records, sinking meteorites out of sight, and threatening archaeological discoveries. My story for Yale E360: https://e360.yale.edu/features/glacier-melt-ice-cores-artifacts-meteorites

Monday, June 3, 2024

Narwhals!

I spend a lot of my time reporting on very serious issues, from climate change to the challenges of AI. So it was a delight to write up this Q&A about narwhals for Knowable Magazine.

Read all about this one-toothed wonder, and the dentist (yes really) who is spearheading its research, alongside Inuit and scientific collaborators.

https://knowablemagazine.org/content/article/living-world/2024/life-of-the-narwhal-martin-nweeia

Thursday, May 30, 2024

The movie Atlas raises the question: will AI robots kill us all?

In the new Netflix action/sci-fi movie ‘Atlas’, we get a revisiting of a now-familiar tale: AI decides, in its infinite wisdom, that the single biggest threat to humanity is, well, humans. So the AI sets out to kill off vast swaths of humanity for humanity’s sake. Cue drama.


The set-up to the movie’s action amusingly includes a clip from a real television interview with Geoffrey Hinton, sometimes called the ‘godfather of AI’. Hinton, a now-76-year-old retired researcher, did the work at the University of Toronto (and later Google) that really broke open the current AI revolution: he pioneered the idea that ‘neural networks’ could enact ‘deep learning’ algorithms, leveraging graphics processing units (computer chips called GPUs) and the vast stores of data on the internet (see my 2016 feature, The Learning Machines). That philosophy is what has led us to large language models like OpenAI’s ChatGPT. The clip in the movie shows Hinton saying: “If we produce things that are more intelligent than us, how do we know we can keep control? I mean, Isaac Asimov once said, if you make a smart robot, the first rule should be ‘do not harm people’.” Hinton is notably, famously, concerned about the development of strong AI and how it might go rogue.

Atlas plays on this idea but then, amusingly, takes a twist (no plot spoilers; this is in the preview and other movie reviews): in order to save the world from a ‘bad AI’ named Harlan, Jennifer Lopez’s character must rely on a ‘good AI’ named Smith.

(I wrote to Hinton about all this, by the way, and asked whether anyone asked for his blessing to be in the film. He replied succinctly: "they did not ask me. Its a silly movie.")

Let’s take a look at a few of the themes here and see how they’re interacting with reality. I’ll just note that I’ve been writing a lot about AI for Nature lately, as well as editing a lot of great features by other writers; I’m drawing on much of what I’ve learned from that experience here, with thanks to those amazing reporters.

We do not have completely humanoid robots (of the kind you’d mistake for a person), but we do have some startlingly adept AI-driven machines (see The AI Revolution is Coming to Robots). The company Figure has created a humanoid robot that recently showed its stuff in a demo video: a person said “give me something to eat” and the robot picked out an apple from the table and handed it to him. This is pretty cool: it requires the robot to recognize an apple, know that it’s food, and be able to pick it up and hand it over without crushing it or shoving it in the person’s face, none of which is particularly obvious. Likewise, Boston Dynamics is making some pretty impressive stunt-capable robots, the latest of which is (by coincidence, I think) named Atlas.

There is a great debate as to how intelligent (in the human sense) the large language models feeding these systems are; certainly they can outperform PhD-level scientists on exams, speak fluently, and comprehend language as well as the average person (see AI now beats humans at basic tasks). They are also fallible. AIs that generate text and images can mistake parody for fact, present falsehoods with great confidence, be racist and sexist, and fall victim to psychological foibles like implicit bias; they can even confabulate, making up logical-sounding but irrational reasons for why they come to certain conclusions (see Psychology and neuroscience crack open AI).

Such ‘mistakes’ just make AI seem ever more human. If a chatbot gives the appearance of having sentience, does it have sentience? It is distressingly hard to tell. One expert famously called his chatbot “slightly conscious”, which made many people laugh, and others pause.

In Star Trek (sorry for another fictional reference, but it’s where we are), there is a famous court case to ascertain whether the AI robot Data has rights or is ‘just a toaster’ that can be considered property. As the defence rightly points out, while Data can’t prove he is sentient, neither can anyone else prove that they themselves are. To be human and have human rights you don’t need to have ‘normal feelings’ (some don’t), nor be smart (some aren’t). To be alive you don’t have to have a brain (many organisms don’t). We are at a point now in the real world where the fuzziness of these terms and definitions is coming to a head. What is AI, exactly? That’s up for debate. If and when we develop a fleet of AI robots, can we justifiably treat them as property, or would that make them slaves?

At any rate, since chatbots can go wrong, so can robots, and that’s worrying. Wielding falsehoods is different from wielding, say, a knife. Isaac Asimov did indeed lay out three useful laws of robotics, in brief: 1) a robot shouldn’t hurt people; 2) a robot should follow orders, unless that hurts people; 3) a robot should protect itself, unless that violates the first two rules. And yes, roboticists are actually using these principles to program their robots: fiction has come to life.
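
To make that concrete, here's a toy sketch, in Python, of how Asimov-style laws can be coded as a strict priority ordering, where a higher law vetoes everything below it. This is entirely my own illustration (the Action fields are invented), not any real robot's code:

    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool = False      # would this injure a person?
        is_ordered: bool = True        # did a human command this?
        endangers_robot: bool = False  # would this damage the robot?

    def permitted(action: Action) -> bool:
        """Check an action against Asimov-style laws, highest priority first."""
        if action.harms_human:          # Law 1 vetoes everything below
            return False
        if action.is_ordered:           # Law 2: obey (harmless) orders
            return True
        return not action.endangers_robot  # Law 3: self-preservation

    print(permitted(Action("hand over the apple")))                   # True
    print(permitted(Action("push the bystander", harms_human=True)))  # False

The hard part in real robotics, of course, is hidden inside those boolean flags: deciding whether an action would ‘harm a human’ is exactly the judgement a robot can’t easily make.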

OpenAI is now engaged in a massive ‘superalignment’ project, the intent of which is to bake an ethical concern for humanity into its AI – basically, to make sure that its AI is pro-humanity, and not evil. How exactly they aim to do this remains unclear, but the training process for AI is vaguely similar to parenting: an AI responds to a situation and the humans say ‘yeah, that’s good’ or ‘no no no, that’s not the way we do things’, until the bot/child no longer hits people / mistakes garbage for food / thinks torturing animals is okay.
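
That feedback loop is, roughly, what’s called reinforcement learning from human feedback. Here's a deliberately tiny Python sketch of the idea; the three canned responses and the hand-coded rater are my own stand-ins for illustration, not OpenAI's actual method:

    import math, random

    responses = ["helpful answer", "rude answer", "harmful answer"]
    scores = {r: 0.0 for r in responses}  # learned preference estimates

    def rater(response: str) -> float:
        """Stand-in for a human (or reward model): +1 approve, -1 disapprove."""
        return 1.0 if response == "helpful answer" else -1.0

    for step in range(1000):
        # Sample a response, favouring those the raters have approved so far
        weights = [math.exp(scores[r]) for r in responses]
        choice = random.choices(responses, weights=weights)[0]
        # Nudge the policy toward (or away from) the rated behaviour
        scores[choice] += 0.1 * rater(choice)

    print(max(scores, key=scores.get))  # after training: "helpful answer"

The real thing trains a neural network on huge numbers of such judgements, but the shape of the loop (act, get judged, adjust) is the same.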

It seems to me that speaking to any AI should be thought of like speaking to a person (not like dealing with a calculator): in any given interaction, we tend to make judgements about a person’s level of intelligence, empathy, political leanings and more, and use that to assess what they are saying and how many grains of salt to take it with (which we don’t do with a calculator). Of course, these assessments can also go wrong: people tend to put greater trust, irrationally and sometimes to their detriment, in ‘people like me’. And a good con artist can inspire trust when it isn’t deserved.

This is the worry (also Hinton’s worry): a smart AI will be a very, very good con artist, capable of manipulating people’s beliefs and actions (as Harlan is, in the movie). I don’t think AI will have its own agenda per se, whether to save humanity or take over the world – at least not for a long time yet. But that doesn’t stop someone from programming one in. People who want to sway an election, or to convince people that smoking isn’t unhealthy or that climate change isn’t a worry, could harness a convincing AI very powerfully. And that is worrying.

Even well-intentioned programming is slippery and hard to define. Philosophers and ethicists know this. Would/should you push one person off a bridge onto a train track in order to save hundreds of lives? There’s no one right answer to that question. This came up with self-driving cars, when programmers realized they had to get their cars to make hard choices: whether to hit a pedestrian, say, or instead swerve into a concrete barrier, potentially killing the driver. The Moral Machine project, in response to this problem, aimed to collate ethical thinking from different cultures around the world.
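
To see why this is a programming problem and not just a philosophy seminar: a self-driving system has to score its candidate maneuvers somehow, and any scoring function quietly encodes ethical weights. A toy illustration, with risk numbers I have invented for the sake of argument:

    # Candidate maneuvers with (made-up) estimated casualty risks
    maneuvers = {
        "continue straight":   {"pedestrian": 0.9, "occupant": 0.0},
        "swerve into barrier": {"pedestrian": 0.0, "occupant": 0.6},
    }

    # The ethics hides in these weights: how much is each life worth?
    weights = {"pedestrian": 1.0, "occupant": 1.0}

    def expected_harm(risks: dict) -> float:
        return sum(weights[k] * v for k, v in risks.items())

    best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
    print(best)  # "swerve into barrier" -- change the weights and it flips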

It's worth noting that self-driving cars have been involved in fatal incidents: the US National Highway Traffic Safety Administration (NHTSA) has apparently tracked more than 11 deaths involving cars with AI driving systems (though so far it seems AI cars are safer than human-driven ones; accidents happen). So: in one sense at least, AI robots have already killed people.

I’m not so much worried about AIs developing their own agency and agendas; I am worried about people harnessing AI to fulfill their own agendas. (On the whole I’m pretty optimistic about human nature, but bad actors do exist.) Future AIs – chatbots or robots – should come with disclaimers, saying who paid for their programming, what political beliefs they hold and how often they get their facts wrong. Interestingly, that’s the sort of information we already expect (and need investigators to track down) for things like newspapers and prominent political figures. The skills we have – and teach our children – for assessing the value of information will be ever, ever more important.

So: my takeaways from the movie? I bet we’ll be able to put in safeguards to stop terrorist AIs from killing humanity. But I also bet we can’t make AI that’s perfectly wholesome and trustworthy and right. The reality is somewhere in between Harlan and Smith, and is likely to stay there.

Do you own your own voice?

Answer: not really.

Someone can clone your voice without violating any copyright or other laws (though if they use it to scam someone, the scamming part is illegal; and if you're famous, it might violate your publicity rights). Is that good enough?

See my story in Nature: https://www.nature.com/articles/d41586-024-01578-4

Wednesday, May 22, 2024

The US Congress is taking on AI — this computer scientist is helping

I had the great pleasure of meeting Kiri Wagstaff at the AAAI conference in Vancouver this year, and she immediately impressed me with her knowledge of both the techy side of AI and US policy surrounding AI regulation. Turns out she's doing a 1-year AAAS congressional fellowship and has moved to DC to help inform AI policy. There's a fascinating story behind it all. You can read my Q&A with her here:

https://www.nature.com/articles/d41586-024-01354-4

PS - interestingly, Kiri declined to have our interview transcribed using OtterAI, out of concern about how they might use the uploaded data for training their AI. I have added a "data disclosure" line to my email signature now, saying "I correspond on Gmail, do most interviews on Zoom, record with permission, and transcribe using OtterAI."

Friday, April 26, 2024

Plastic pollution by the numbers

My story for Nature https://www.nature.com/articles/d41586-024-01117-1

122 million tonnes of mismanaged plastic waste

6.78 gigatonnes of greenhouse gases

4,200 chemicals deemed hazardous

Monday, April 15, 2024

Stanford's AI Index report

Stanford University's Institute for Human-Centered Artificial Intelligence produces this great, very useful annual report about the state of AI. It makes for fascinating reading. I recommend the full report if you have time for 400 pages :) If not, here's my news story!

https://www.nature.com/articles/d41586-024-01087-4

Wednesday, March 27, 2024

Q&A: Plastic pollution

Imogen Napper is a high-profile plastics investigator: her research prompted the ban on plastic microbeads in beauty products, and has spurred a new rule requiring microfibre-catching filters in washing machines in France. Read all about her very cool work here: https://knowablemagazine.org/content/article/food-environment/2024/imogen-napper-interview-global-treaty-plastic-pollution

This article was republished in a few places, as per Knowable's republication guidelines.

https://www.scientificamerican.com/article/plastic-pollution-is-drowning-earth-a-global-treaty-could-help/

https://goodmenproject.com/featured-content/inching-toward-a-global-treaty-on-plastic-pollution/ 

https://www.yahoo.com/news/inching-toward-global-treaty-plastic-000000926.html

Wednesday, March 20, 2024

Direct Air Capture Ramps Up

The notion of sucking carbon dioxide straight out of the air and tucking it away is seriously ramping up, with a 1-megatonne project underway in Texas (125 times bigger than the current largest effort, in Iceland).

Read all about it in Yale Environment 360 https://e360.yale.edu/features/direct-air-capture

Nuclear's role in a net zero world

My story for Knowable

https://knowablemagazine.org/content/article/food-environment/2024/nuclears-role-in-a-net-zero-world