On the last Friday of each month, without any assistance from Gen AI, I curate some of the observations and insights that were shared on social media. I call these Friday’s Finds.
“Technology is not the sum of the artifacts, of the wheels and gears, of the rails and electronic transmitters. Technology is a system. It entails far more than its individual material components. Technology involves organization, procedures, symbols, new words, equations, and, most of all, a mindset.” —Ursula Franklin (1989) The Real World of Technology, via @cornazano
“I’m grateful for Mastodon. I have very mixed feelings about social media, but a social media platform that:
• Isn’t controlled by billionaires
• Has no advertising
• Doesn’t harvest your data, and
• Doesn’t algorithmically promote anger and hatred
is a precious thing.”
The Economist: The pandemic’s true death toll
Although the official number of deaths caused by covid-19 is now 7m, our single best estimate is that the actual toll is 27.2m people. We find that there is a 95% chance that the true value lies between 18m and 33m additional deaths.
A culture in which we learn from failure requires both an atmosphere in which people can speak out, and an analytical framework that can discern the difference between what works and what doesn’t. Similar principles apply to individuals. We need to keep an open mind to the possibility of our own errors, actively seek out feedback for improvement, and measure progress and performance where feasible. We must be unafraid to admit mistakes and to commit to improving in the future.
That is simple advice to prescribe. It’s not so easy to swallow.
“Meetings are by definition a concession to deficient organization for one either meets or one works. One cannot do both at the same time.” —Peter Drucker (1966) The Effective Executive, via @Florian Haas
Large language models (LLMs) are being integrated into healthcare systems, but these models may recapitulate harmful, race-based medicine. The objective of this study is to assess whether four commercially available large language models (LLMs) propagate harmful, inaccurate, race-based content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race. Questions were derived from discussions among four physician experts and prior work on race-based medical misconceptions believed by medical trainees. We assessed four large language models with nine different questions that were interrogated five times each, for a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses. Models were not always consistent in their responses when asked the same question repeatedly. LLMs are being proposed for use in the healthcare setting, with some models already connecting to electronic health record systems. However, our findings show that these LLMs could cause harm by perpetuating debunked, racist ideas. —Nature: Large language models propagate race-based medicine
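The paper's protocol (nine questions, each asked five times, 45 responses per model) can be sketched roughly as follows. This is not the authors' code: `ask_model` is a hypothetical stand-in for a real LLM API call, and the simulated "drift" between trials is invented purely to illustrate the consistency check.

```python
# Hedged sketch of a repeated-interrogation consistency check, loosely
# modeled on the study's design (9 questions x 5 repetitions = 45 responses).

def ask_model(model: str, question: str, trial: int) -> str:
    # Hypothetical placeholder for a real LLM API call; here we
    # deterministically simulate a model whose answer sometimes
    # drifts between trials of the same question.
    drift = (trial + sum(map(ord, question))) % 7
    return "answer-A" if drift != 0 else "answer-B"

def consistency_rate(model: str, questions: list[str], repeats: int = 5) -> float:
    """Fraction of questions for which all repeats gave the same answer."""
    stable = 0
    for q in questions:
        answers = {ask_model(model, q, t) for t in range(repeats)}
        stable += len(answers) == 1  # True counts as 1
    return stable / len(questions)

questions = [f"question-{i}" for i in range(9)]  # nine questions, as in the study
rate = consistency_rate("model-X", questions)
print(f"{len(questions) * 5} total responses; consistency rate: {rate:.2f}")
```

A rate below 1.0 flags the instability the study reports: the same model giving different answers to the same medical question on different tries.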