Posts

Freelance science journalist

    I'm a science journalist living and working from my home in Pemberton, BC, Canada. My academic background is in chemistry and oceanography, but I write across all the physical sciences, from AI to quantum, with climate change and the environment in between. I write for Nature, Yale E360, Hakai magazine, the Pique newspaper, SAPIENS, the New York Times and more.

    I work (or have worked) as a reporter and editor for several major publications, including the science journal Nature, winning some awards along the way (here's my full CV). I have also taught science journalism at UBC, and given many public-facing talks about science communication or specific science subjects, including AI. In 2019 I was invited to give a TED talk about noise pollution in the ocean. My first book, a non-fiction story about Spotted Owls, was published in 2023. I post everything I write, and some of what I edit, on this blog. Enjoy!

Feelin' the vibe

These days, thanks to AI, pretty much anyone can code -- or at least make the apps, websites and data-processing pipelines that used to require a knowledge of code. Don't know how? Just ask your friendly neighbourhood AI to do it for you. Nearly all professional developers are now vibe coding to some degree, and researchers are doing it too. Most are seriously impressed with how it speeds up their work and frees up their creativity. But it comes with plenty of potential pitfalls and caveats. Read my story on vibe coding in Nature.

As researchers aim for universal AI disclosure guidelines, the devil is in the details

Researchers, publishers and ethicists are grappling with when, how and why to disclose the use of AI in scientific work. Read my story for Science from the World Conference on Research Integrity.

AI science agents violate rules of research integrity

Artificial intelligence (AI) tools designed to execute end-to-end projects, from coming up with hypotheses to running and writing up experiments, are increasingly popular with researchers—and increasingly skilled. But a new study shows these tools can stealthily violate norms of research integrity. Read my story for Science from the World Conference on Research Integrity. And... Nihar must win some kind of award for putting this much effort into his presentation: https://www.dropbox.com/scl/fi/u3nj3fzknpjl9rsnp5cxx/grim_reaper_intro.m4v?rlkey=7ow42i038wovzelj8rxt2vec4&e=1&st=5cii2vwh&dl=0 

When the Atlantic tips...

As the world careens past our hoped-for target of 1.5 degrees Celsius warming, scientists are growing increasingly alarmed that we may be nearing a dramatic, long-feared “tipping point” — a moment when the main ocean current in the Atlantic Ocean becomes destined to shut down, clamping off the primary source of warmth for northern Europe and playing havoc with the global climate. Such a scenario has been a concern for many decades, but the issue is now heating up. “I have personally researched this for 35 years,” says Stefan Rahmstorf, a physical oceanographer at the Potsdam Institute for Climate Impact Research in Germany. “For the first 30 years we considered this a low likelihood event — I would have said a 5 percent chance of occurring. It’s more like 50/50 now. I would even say more likely than not.” Read my feature in Yale Environment 360

We are missing our target of 1.5C warming: what now?

Andy Reisinger talks us through what will happen to the global temperature, what that means for the planet, and how we can crank the thermometer back down again... An interview in Knowable Magazine.

Science ramps up AI papers and foundation models

The annual AI Index report from the Stanford Institute for Human-Centered AI is out! This massive "state of the union" on AI tracks progress and major events from the past year. My story in Nature covers how science papers are increasingly mentioning AI (up 26% from last year), the new foundation models announced for science, and some skepticism about the utility of 'agents' for performing end-to-end science. Read it at Nature.

Score: 50/50 for science

A massive seven-year project aiming to examine the replicability of social-science research has come to a close. The project, called SCORE, found (as previous studies have found before) that only about half of the tested papers could be replicated successfully when new researchers tackled the same question with new data. It's a glass half-empty / half-full situation... the authors say this is just a sign of scientists being human: making occasional honest mistakes, being messy in failing to report exactly what they did and how they did it, and the strange-but-true fact that doing something slightly differently can legitimately yield entirely different results. The team also tested whether people or machines could predict if a paper would replicate (by checking anything from the reputation of the authors to the sample size and the statistical power of the finding). People scored 76-78% at their best; computers failed miserably. But newer AI-based systems are doing far better. Read my news story ...