Artificial intelligence (AI) has the potential to increase global economic activity in the industrial sector by $13 trillion by 2030.6 However, this potential remains largely untapped because of a lack of access to, or a failure to effectively leverage, data across company borders.10 AI technologies benefit from large amounts of representative data—often more data than a single company possesses. Achieving good AI performance is especially challenging in industrial settings involving unexpected events or critical system states that are, by definition, rare. Industrial examples are the early detection of outages in power systems or the prediction of machine faults and remaining useful life, for which robust inference is often precluded by scarce data.
A solution is to implement cross-company AI technologies that have access to data from a large cross-company sample. This approach can effectively compile large-scale, representative datasets that allow for robust inference. In principle, this could be achieved through data sharing. However, owing to confidentiality and risk concerns, many companies remain reluctant to share data directly, despite the potential benefits.10 In some cases, data sharing is also precluded by privacy laws (for example, when data from individuals are involved). Likewise, sharing the code of AI models among companies has its own drawbacks. In particular, it prevents AI from learning from large-scale, cross-company data and thus restrains the potential performance gains from cross-company collaboration.