The research and advice in Evaluating Online Teaching (2015) covered the range of then-possible online instructional scenarios — what we might nowadays call ‘traditional’ online spaces such as learning management system (LMS) shells that afforded a highly mediated and structured range of possible interactions among instructors, learners and the tool sets within those spaces. Largely asynchronous, online teaching left a clear trail of observable phenomena. Announcement posts, discussion threads and comments on student work were all captured in the LMS space. At the time of publication, we worried that such a cornucopia of observable data points would lead to ‘analysis paralysis’, and we advised observers of online teaching to limit their observations to one unit or online session, much as an on-ground observer might observe only one or two class periods of live class time.

We now find ourselves in an instructional world where the bounded environment of the LMS seems almost simple. Especially as a result of the emergency remote instruction during the COVID-19 pandemic, the possible permutations and definitions of ‘online teaching’ have exploded. For those of us tasked with observing, evaluating, crediting and critiquing the teaching that happens at our institutions, we can no longer assume that looking in one ‘place’ — whether that is a classroom, an LMS shell or a Zoom recording of a live remote session — will afford us a representative sample of the teaching practices and behaviours that instructors exhibit.

This best-practice paper outlines what has changed in technology-supported and technology-mediated teaching and offers ways to observe and assess online teaching that are consistent, equitable and fair.