Each year in the United States we spend more than $48 billion on the construction of healthcare facilities while engaging a large number of competing stakeholders.1 Financiers and planners provide perspectives on cost-effectiveness and zoning code constraints. Architects and facilities operators balance the tension between ideal design and the practical constraints of operating and maintaining the building. Clinicians and patients focus on how the hospital space will function, with emphasis on direct patient care and quality of life for providers. Because so many perspectives are integrated into the design and construction process, designs must have clear metrics to communicate their performance to each relevant stakeholder. Despite broad interest in evaluating hospital design, efforts to do so have been variable. For example, stakeholders interested in energy efficiency routinely evaluate kilowatt usage, which can often be attributed to specific design features of a building. In contrast, far less has been done to evaluate how design relates to the clinical outcomes of patients. The most cited work in healthcare architecture retrospectively reviewed 46 patients undergoing open cholecystectomy procedures.2 Despite its small sample size, lack of risk adjustment, and narrow clinical cohort, the findings (that window views may improve recovery after surgery) have informed an entire generation of hospital designs with large windows. Efforts to develop a more scientific understanding of how other hospital design features may influence clinical outcomes continue to evolve in both methods and content. We provide an overview of how hospital design can be more rigorously evaluated. We discuss practical barriers to evaluating hospital architecture and share common design features thought to influence care delivery.
Finally, we explain how standard health services research techniques commonly applied in clinical studies can and should be used to rigorously evaluate the performance of hospital design. There are several practical reasons why it has been difficult to measure the quality of hospital design. Evaluating the impact of hospital design on clinical outcomes requires data from both design and clinical care. Hospital design data (e.g., blueprints, floor plans) serve their initial purpose during the design and construction phases of the hospital and are later maintained by a limited number of facility engineers or space planning teams. Clinical and patient-level data are available from electronic medical records or insurance claims. Due to the sensitive nature of health records, these are rarely accessible to nonclinician researchers. As such, hospital design and patient-level data are stored in separate locations with different levels of accessibility. Performing research on the impact of hospital design on clinical outcomes also requires bridging two distinct disciplines. Architects and planners specialize in the generation of hospital design and construction but have limited training in the measurement of clinical outcomes. In contrast, healthcare researchers and clinicians are often content experts in clinical processes and quality but have little to no training in integrating hospital design variables into outcomes assessment (e.g., which variables to use and how to treat them as exposures). Taken together, neither group alone is positioned to effectively evaluate the quality of hospital design using clinical outcomes. A common method in architecture for evaluating design quality is the “postoccupancy evaluation”: a systematic evaluation of the building's function after it has been occupied by its tenant. Unfortunately, postoccupancy evaluations have historically been viewed as more harm than help.
Architects fear it may expose competence gaps in the design and limit their ability to secure future projects. Hospital systems worry that a postoccupancy evaluation may identify flaws in the design that are too costly to correct, making the process futile. Beyond liability concerns, there is a practical barrier of who should fund the evaluation. As such, fewer than 4% of hospitals receive any formal postoccupancy evaluation.3 Siloed expertise and data between architects and clinicians, together with the absence of a shared financial incentive, have limited any momentum toward measurement consensus. Thus, postoccupancy hospital evaluations, when performed, use varied quantitative outcomes and methodologic approaches. Furthermore, the use of qualitative data is not standardized, which makes it challenging to compare different postoccupancy evaluations. Even if architects and clinician researchers could share the data and expertise needed to evaluate hospital design and clinical outcomes, other important stakeholder priorities would remain. Hospital design that prioritizes patient outcomes or clinician preferences may not always align with design feasibility or costs. Moreover, code restrictions or philanthropy specifications may dictate important aspects of hospital design. As such, the association of clinical outcomes with hospital design becomes one of many measures involved in hospital design. Several spatial environment design features may affect processes and outcomes of care. Single and double rooms are believed to have several impacts on the outcomes of patients.
First, single rooms likely provide greater privacy for patients, reducing anxiety and discomfort.4 Second, double rooms are expected to shorten response times during rapid response events because a roommate may alert staff.5 Third, double rooms are expected to have higher rates of nosocomial infections because of the proximity of patients.6 Access to a window in a patient room may improve patient satisfaction by providing a positive distraction.7 While patients with a window view tend to have shorter stays and consume less analgesia, the precise pathway that could explain why outcomes change is still not clear.8 The ability to adjust lighting and mimic a natural light cycle aids patients in regulating circadian rhythms and minimizes the disturbances of a hospital stay on sleep.9 Because hospitalization often disrupts patients' prior behavior patterns, lighting can be an important tool to reorient patients to time and place.10 The volume level patients are exposed to in the hospital may worsen anxiety and disrupt sleep patterns.11 Sound also may contribute to perceptions of privacy and patient comfort during inpatient stays. In any given inpatient floor plan, each room has a variable distance from the main nursing station. It is thought that patients closer to the nursing station may receive more timely care.12 The ability of providers to directly see a patient from either the hallway or the nursing station is referred to as a line of sight. Ideally, a direct line of sight allows providers to identify when a patient needs help in a more timely manner.13 Examples of hospital floor plans highlighting these different features are demonstrated in Figure 1. Of note, when combined across a floor plan, these design features often create natural variation in design exposures (Figure 2). As such, they lend themselves to a natural experiment study design.
Using an exposure framework to codify patient rooms or units by their spatial features enables investigators to understand what spatial components may be impacting services. Potential hospital design features can be identified through a variety of methods. The two most common are hospital blueprints and in-person walkthroughs. Most design features can readily be identified using hospital blueprints. While blueprints are often not available to the public, hospital facilities and operations departments maintain an up-to-date archive of floor and room plans. Placing blueprints into computer-aided design software can facilitate identifying design variables such as square footage, distances, adjacencies, lines of sight, and circulation paths. Most floor plans will include a room number, which becomes an important label when attempting linkage to clinical data. Walking through the hospital to verify and annotate the blueprints is an additional method to document hospital design features. While architectural drawings are preferred for documentation, hospitals publicly post fire exit plans, which can serve as initial images for recording design elements. These data, such as room number and design features, can then be translated into numerical formats that can be linked to clinical data. Ideally, both of these methods should be used together, combining the objective rigor of blueprint measurement with real-time face validity for how the space is actually occupied. Only in-person walkthroughs can capture spatial nuances that may not be visible in blueprints. Validating spaces that have changed recently, or that serve as flex spaces, avoids inaccuracies that come from relying solely on blueprints. Developing a team of both designers and clinicians to document design elements captures the greatest amount of specificity while maintaining accuracy to the current spaces within the hospital.
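As a concrete illustration, blueprint measurements can be translated into numerical exposure variables keyed to the room number. The sketch below is hypothetical: the coordinates, room numbers, and 10-meter "near station" threshold are invented for illustration, not drawn from any real floor plan.

```python
import math

# Hypothetical nursing station location on the floor plan, in meters.
station = (0.0, 0.0)

# Illustrative room attributes as they might be measured from a blueprint
# and confirmed during an in-person walkthrough.
rooms = {
    "4-101": {"xy": (6.0, 5.5), "occupancy": "single", "window": True},
    "4-102": {"xy": (12.0, 8.0), "occupancy": "double", "window": True},
    "4-103": {"xy": (18.0, 11.0), "occupancy": "double", "window": False},
}

# Derive analyzable exposure variables, keyed by room number for later
# linkage to clinical data.
features = {}
for room, attrs in rooms.items():
    dist = math.dist(attrs["xy"], station)  # straight-line distance (meters)
    features[room] = {
        "occupancy": attrs["occupancy"],
        "window": attrs["window"],
        "dist_to_station_m": round(dist, 1),
        "near_station": dist < 10,  # illustrative binary exposure
    }

print(features["4-101"])
```

Distances here are straight-line for simplicity; circulation-path distances traced in computer-aided design software would usually be preferable.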
Once the design features have been identified, they can be linked to clinical data. Two domains of clinical data may be particularly relevant: processes of care and clinical outcomes. Process of care refers to the actual steps in the service provided to treat a patient.14 Examples of process measures include administering medications or completing measurement and documentation of scheduled vital signs. One may explore whether patients in different room types (e.g., single vs. double) are more likely to be prescribed sleeping aids. A major advantage of processes of care is that they are granular and actionable. Clinical outcomes refer to the end result of care provided.14 Common examples include mortality, complications, and falls. For example, there may be a relationship between inpatient mortality and patient distance from the nursing station, with the presumed mechanism that those closer are more closely “watched.” Clinical outcomes enjoy broad face validity with many stakeholders, representing the “bottom line.” To fairly measure outcomes and make a meaningful comparison, complex econometric techniques (detailed below) are needed that take into account other data points and potential confounders (e.g., patient comorbid conditions) that may influence a given outcome. As such, this approach is more resource intensive and requires more technical expertise. Electronic medical records are the primary source for obtaining process of care and clinical outcomes data. In the most practical sense, the design features of a hospital ward can be linked to patient-level data using the room or bed number a patient occupies. Such an approach would not be possible, however, in other commonly used data sets (e.g., clinical registries or administrative claims) that do not specify the granular location of care. Future work on developing databases that link design elements to clinical data across hospitals and providers would significantly expand the data available for performing hospital design research.
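In practice, the room-number linkage described above amounts to a join between a room-level design table and patient-level records. A minimal sketch, with entirely invented patients, rooms, and outcomes:

```python
# Hypothetical room-level design exposures, keyed by room number.
design = {
    "4-101": {"occupancy": "single", "window": True},
    "4-102": {"occupancy": "double", "window": False},
}

# Hypothetical patient-level records, e.g., extracted from an electronic
# medical record, with the occupied room number and a fall outcome.
admissions = [
    {"patient_id": "A", "room": "4-101", "fall": 0},
    {"patient_id": "B", "room": "4-102", "fall": 1},
    {"patient_id": "C", "room": "4-102", "fall": 0},
]

# Attach the design exposures to each admission via the room number.
linked = [{**adm, **design[adm["room"]]} for adm in admissions]

# A crude (unadjusted) outcome rate across one exposure; fair comparison
# would still require the risk adjustment methods discussed below.
double = [r["fall"] for r in linked if r["occupancy"] == "double"]
print("crude fall rate in double rooms:", sum(double) / len(double))
```

The same join generalizes to bed numbers where rooms hold multiple patients, provided the clinical data record bed-level location.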
Multiple health services research techniques exist to identify relationships between exposures and clinical outcomes. These methods can be applied to rigorously evaluate the association between hospital design features and clinical care delivery and outcomes. Risk adjustment is an econometric technique that builds a regression model (often logistic) taking into account known variables that influence clinical outcomes, such as age, gender, and comorbidities.15 By incorporating relevant covariates into the regression, the impact of hospital design is isolated from confounding variables. For example, hospitalists often know where staff frequently work and at what scale a team of providers operates. Using the floor or unit as a covariate offers a potential control for variation across clinical teams. Performing risk adjustment requires a significant amount of data on patients and clinicians and can be labor-intensive. Standard cohorts restrict comparisons to patients who share similar traits that could otherwise confound the association between exposures and outcomes. Identifying patients by procedure and age (and other variables, if available) is a method of controlling for confounding when information is limited. The tradeoff of standard cohorts is that they often reduce the sample size to such an extent that it is no longer large enough (i.e., it lacks “statistical power”) to detect meaningful differences in outcomes across design exposures. Health services research has developed a robust list of relevant covariates to include when investigating outcomes. However, it is less clear which variables are relevant to explaining the effects of hospital design and which are simply redundant. Testing the variance inflation factor for each variable can identify collinear variables that should be removed to avoid overfitting a risk-adjusted model.
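To make risk adjustment concrete, the following sketch simulates a small cohort and fits a logistic regression by gradient ascent, estimating the association between a design exposure (single room) and a binary outcome while adjusting for one covariate (older age). Everything here is illustrative: the simulated coefficients, sample size, and learning rate are invented, and in practice one would use an established statistics package and a much richer covariate set.

```python
import math
import random

random.seed(0)

# Simulated cohort: in the "true" model, older patients have higher risk
# and single rooms lower the log-odds of the outcome by 0.7 (illustrative).
rows = []
for _ in range(2000):
    elderly = 1.0 if random.random() < 0.5 else 0.0
    single = 1.0 if random.random() < 0.5 else 0.0
    logit = -2.0 + 1.0 * elderly - 0.7 * single
    p = 1 / (1 + math.exp(-logit))
    rows.append((elderly, single, 1 if random.random() < p else 0))

# Fit logistic regression by full-batch gradient ascent on the log-likelihood.
beta = [0.0, 0.0, 0.0]  # intercept, elderly, single-room coefficients
lr = 0.5
for _ in range(600):
    grad = [0.0, 0.0, 0.0]
    for elderly, single, y in rows:
        p = 1 / (1 + math.exp(-(beta[0] + beta[1] * elderly + beta[2] * single)))
        for j, xj in enumerate((1.0, elderly, single)):
            grad[j] += (y - p) * xj
    beta = [b + lr * g / len(rows) for b, g in zip(beta, grad)]

# The adjusted single-room coefficient should recover a negative
# (protective) association after accounting for the age covariate.
print("adjusted single-room log-odds:", round(beta[2], 2))
```

The same model extends directly to unit or floor indicator covariates to control for variation across clinical teams, as described above.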
Propensity score matching is another approach, similar to logistic regression, for accounting for covariates that may influence the outcome; it adjusts for the pathways by which covariates also explain who receives a treatment or exposure. Rather than “adjust the outcome,” propensity score matching attempts to “adjust the cohort” upfront, before comparisons are made.16 This approach is particularly useful when outcomes are sparse relative to the number of covariates. Hierarchical modeling can be used to understand sources of variation that occur across several observational scales that “nest” within each other. For example, evaluating complication rates using a hierarchical model would nest patients within rooms, within doctors, within nursing units, within floors, and within hospitals. By evaluating the observed variation at each scale, a hierarchical model can help identify the level at which differences seem to be occurring while addressing additional confounders. Stepped wedge design refers to a progressive exposure timeline for different cohorts of patients. Using a stepped wedge design allows researchers to control for secular trends that may occur across time and exposure combinations. Consider, for example, changes to the lighting in the patient rooms on a given floor. Rather than change them all at one time, a stepped wedge design would favor changing a segment of rooms each month over a year. In doing so, researchers can evaluate whether changes in observed outcomes occurred after each segment of rooms was exposed, or whether a secular trend (e.g., seasonality) influenced all the rooms independent of the lighting change. While these methods are widely available and commonly used in health services research, their application to evaluating hospital design has been limited.
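A toy sketch of the propensity-score matching step may help: each exposed patient (single room) is greedily paired with the unexposed patient (double room) whose propensity score is closest, and outcomes are compared within the matched cohort. The patients, scores, and outcomes below are invented; in practice the scores come from a regression of the exposure on covariates, and matching would enforce a caliper on score distance.

```python
# Hypothetical patients as (patient_id, propensity_score, outcome) tuples.
exposed = [  # single-room patients
    ("A", 0.62, 0), ("B", 0.45, 1), ("C", 0.71, 0),
]
unexposed = [  # double-room patients
    ("D", 0.60, 1), ("E", 0.48, 1), ("F", 0.70, 0), ("G", 0.30, 0),
]

# Greedy 1:1 nearest-neighbor matching without replacement.
matches = []
pool = list(unexposed)
for pid, ps, outcome in exposed:
    best = min(pool, key=lambda u: abs(u[1] - ps))
    pool.remove(best)
    matches.append(((pid, outcome), (best[0], best[2])))

# Outcomes are then compared within the matched cohort only.
exposed_rate = sum(e[1] for e, _ in matches) / len(matches)
control_rate = sum(c[1] for _, c in matches) / len(matches)
print("matched pairs:", [(e[0], c[0]) for e, c in matches])
print("outcome rates:", exposed_rate, "vs", control_rate)
```

Note that unexposed patient G, whose score lies far from any exposed patient, is simply left unmatched; this upfront trimming of incomparable patients is what "adjusting the cohort" means.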
We have outlined several issues involving hospital design research, sources for generating data, and methodologies for investigating the impact of hospital design elements on clinical processes and outcomes. Because hospital design spans many disciplines and stakeholders, several other steps could be taken to improve the implementation of hospital design research into hospital development. Clinical data with the granularity necessary for health design research often must come from a health system's electronic medical record. Collaborations and initiatives to improve the spatial granularity of process and outcome data, in addition to their availability, will provide a greater pool of resources for investigating hospital design. Architectural data can be derived from shared hospital blueprints, with eventually standardized, coded design elements. Hospital design collaborations could readily provide design element data that could be directly applied to health services research. Such collaboration would open the door for clinical and architectural partnerships to advance the knowledge base on hospital design quality. At present, very little financial incentive exists to evaluate the design of a hospital after it is constructed and occupied. Multiple stakeholders, such as hospital leadership or insurance payers, could require a postoccupancy evaluation to be performed. To give the evaluation more weight than a simple requirement, it could be linked to a percentage of the overall construction and design costs. Doing so would mirror the value-based payment plans commonly applied by payers to clinical providers and hospitals. Educational opportunities that allow designers to understand clinical processes and data, and clinicians to understand design and development, are incredibly rare. Sharing expertise across these domains would move architects closer to the elusive goal of evidence-based design.
Training clinicians in the design process and creation of hospitals would empower providers to contribute to the improvement and development of hospitals and their designs. Hospital design plays a crucial role in the ability of hospitals and clinicians to provide quality services to patients. With the tools of health services research and collaboration across disciplines, we hope a new framework and platform for hospital design research will advance the ways in which hospitals are developed and constructed, and ultimately how they impact the people who inhabit them. Dr. Ibrahim serves as a paid consultant to the architecture firm HOK; the firm had no role in the manuscript. The remaining author declares no conflict of interest.