As innovative as the use of AI might be, there are occasional reminders that when it goes wrong, it can go very wrong. Apple's AI-generated news alert feature is definitely one of those reminders. After widespread criticism from users who complained they were being served news headline summaries containing regular errors, Apple has suspended the feature.
Among those complaining about the feature was the BBC, after an AI headline summary was sent to some readers falsely indicating that the man accused of killing UnitedHealthcare's CEO, Brian Thompson, had shot himself.
The feature had also falsely summarised headlines from other news outlets, and the summaries looked as if they were coming from the outlets' own apps. That is a serious problem, with the potential to further erode the public's trust in the news, and in journalism in general.
These false summaries are the result of what are known as 'hallucinations', where an AI model simply makes something up. As yet, there is no reliable way to guarantee that AI output is hallucination-free without it being checked by humans.
With leaps forward in the artificial intelligence world happening so quickly, it makes sense that companies want to be first to put their models and use cases out there. But there needs to be a balance, to ensure hasty roll-outs don't spread misinformation and potentially cause harm in the process.
In a rare U-turn, Apple has removed the feature and will re-introduce it in a future update, once it can be more confident in the accuracy of its summaries.
What do you think? Is it just a small mistake? Or should companies rolling out AI features bear greater accountability for what they're putting out into the world? Head over to LinkedIn to join the discussion.
Further reading: Will TikTok be Banned in the UK?