An article yesterday in Slate suggests that Google's "alternative facts" and Uber's self-driving accidents are symptoms of a deeper problem, one that is only going to get worse.

Google’s search software, and in particular the software behind its Home smart speaker, is facing scrutiny this week for giving false, misleading, or otherwise objectionable answers to users’ questions. The problem centers on Google’s AI-powered “featured snippets” feature, which draws on search results to answer certain questions directly rather than linking to information sources around the web. Often these answers are accurate and helpful. But not always. Ask, for example, “Is Obama planning a coup?” and Google will inform you that he’s “in bed with the communist Chinese” and “may in fact be planning a communist coup d’etat.”

Meanwhile, Uber is dealing with the fallout from a series of AI-driven embarrassments, including a New York Times report that it misled the public about a dangerous incident involving one of its self-driving cars. In December, one of its autonomous taxis ran a red light in San Francisco, rolling straight through a pedestrian crosswalk in front of the Museum of Modern Art. (Thankfully, the crosswalk was empty at the time.) Uber initially pinned the mistake on human error, suggesting that an Uber employee had taken the wheel. But last week the Times reported that the car was actually driving itself when it ran the light.

These and similar anecdotes suggest that the hunger for, impatience around, and potential financial rewards of new AI technologies may be pushing those technologies onto the streets before they are ready. Among the likely consequences of such mistakes and failures is an erosion of trust in the human-machine relationship.