
Accountability in Algorithmic Decision Making

By Nicholas Diakopoulos

Communications of the ACM, Vol. 59 No. 2, Pages 56-62
DOI: 10.1145/2844110



Every fiscal quarter, automated writing algorithms churn out thousands of corporate earnings articles for the Associated Press based on little more than structured data. Companies such as Automated Insights, which produces the articles for the AP, and Narrative Science can now write straight news articles in almost any domain that has clean and well-structured data: finance, sure, but also sports, weather, and education, among others. The articles are not cardboard either; they have variability, tone, and style, and in some cases readers even have difficulty distinguishing the machine-produced articles from human-written ones.4

It is difficult to argue with the scale, speed, and labor-saving cost advantage that such systems afford. But the trade-off for media organizations appears to be nuance and accuracy. A quick search on Google for "'generated by Automated Insights' correction" yields results for thousands of articles that were automatically written, published, and then had to have corrections issued. The errors range from relatively innocuous ones about where a company is based, to more substantial wrong word choices—missing instead of beating earnings expectations, for example. Were any of these market-moving errors? Was the root cause bad data, a faulty inference, or sloppy engineering? What is the right way to post corrections?
