Abstract

Purpose

Due to the complexity of digital services, companies are increasingly forced to offer their services “in permanent beta”, requiring continuous fine-tuning and updating. Complexity makes it extremely difficult to predict when and where the next service disruption will occur. The authors examine what this means for performance measurement in digital service supply chains.

Design/methodology/approach

The authors use a mixed-method research design that combines a longitudinal case study of a European digital TV service provider and a system dynamics simulation analysis of that service provider's digital service supply chain.

Findings

With increased levels of complexity, traditional performance measurement methods, focused on detecting software bugs before release, become fragile or futile. The authors find that monitoring the performance of the service after release, with fast mitigation when service incidents are discovered, appears to be superior. This involves organizational change as traditional methods, such as quality assurance, become less important.

Research limitations/implications

The performance of digital services needs to be monitored by combining automated data collection about the status of the service with data interpretation based on human expertise. Investing in human expertise is just as important as investing in automated processes.

Originality/value

The authors draw on unique empirical data collected from a digital service provider's struggle with performance measurement of its service over a period of nine years. The authors use simulations to show the impact of complexity on staff allocation.
