The workloads running in the modern data centers of large scale Internet service providers (such as Alibaba, Amazon, Baidu, Facebook, Google, and Microsoft) support billions of users and span globally distributed infrastructure. Yet the devices used in modern data centers fail due to a variety of causes, from faulty components to bugs to misconfiguration. Faulty devices make operating large scale data centers challenging because the workloads running in modern data centers consist of interdependent programs distributed across many servers, so failures that are isolated to a single device can still have a widespread effect on a workload.

In this dissertation, we measure and model the device failures in a large scale Internet service company, Facebook. We focus on three device types that form the foundation of Internet service data center infrastructure: DRAM for main memory, SSDs for persistent storage, and switches and backbone links for network connectivity. For each of these device types, we analyze long term device failure data broken down by important device attributes and operating conditions, such as age, vendor, and workload. We also build and release statistical models of the failure trends for the devices we analyze.

For DRAM devices, we analyze the memory errors in the entire fleet of servers at Facebook over the course of fourteen months, representing billions of device days of operation.
The systems we examine cover a wide range of devices commonly used in modern servers, with DIMMs that use the modern DDR3 communication protocol, manufactured by four vendors, in capacities ranging from 2 GB to 24 GB. We observe several new reliability trends for memory systems that have not been discussed before in the literature, develop a model for memory reliability, and show how system design choices such as using lower density DIMMs and fewer cores per chip can reduce the failure rate of a baseline server by up to 57.7%. We perform the first implementation and real-system analysis of page offlining at scale, on a cluster of thousands of servers, identify several real-world impediments to the technique, and show that it can reduce the memory error rate by 67%. We also examine the efficacy of a new technique to reduce DRAM faults, physical page randomization, assessing both its potential for improving reliability and its overheads.

For SSD devices, we perform a large scale study of flash-based SSD reliability at Facebook. We analyze data collected across a majority of flash-based solid state drives over nearly four years and many millions of operational hours in order to understand the failure properties and trends of flash-based SSDs. Our study considers a variety of SSD characteristics, including: the amount of data written to and read from flash chips; how data is mapped within the SSD address space; the amount of data copied, erased, and discarded by the flash controller; and flash board temperature and bus power.
Based on our field analysis of how flash memory errors manifest when running modern workloads on modern SSDs, we make several major observations and find that SSD failure rates do not increase monotonically with flash chip wear; instead, they go through several distinct periods corresponding to how failures emerge and are subsequently detected.

For network devices, we perform a large scale, longitudinal study of data center network reliability based on operational data collected from the production network infrastructure at Facebook. Our study covers the reliability characteristics of both intra- and inter-data center networks. For intra-data center networks, we study seven years of operational data comprising thousands of network incidents across two different data center network designs, a cluster network design and a state-of-the-art fabric network design. For inter-data center networks, we study eighteen months of recent repair tickets from the field to understand the reliability of Wide Area Network (WAN) backbones. In contrast to prior work, we study the effects of network reliability on software systems, and how these reliability characteristics evolve over time. We discuss the implications of network reliability for the design, implementation, and operation of large scale data center systems, and how the network affects highly-available web services.

Our key conclusion in this dissertation is that we can gain a deep understanding of why devices fail, and how to predict their failure, using measurement and modeling. We hope that the analysis, techniques, and models we present in this dissertation will enable the community to better measure, understand, and prepare for the hardware reliability challenges we face in the future.