In this special issue of CCPE on specifications-based, applied distributed computing from the Open Grid Forum (OGF), we document an important step in the evolution of this field. The concept of distributed computing has been around for decades, but the systems built have changed over the years, as the available tools, deployed infrastructure, and accepted approaches have changed. Past years have seen distributed environments, such as DCE, CORBA, and DCOM, as well as language-based tools, such as RPC, Java RMI, and many parallel and distributed versions of FORTRAN, C, and C++. Entire libraries could be devoted to cataloging the work done on this topic. All of these systems contributed to our understanding in one way or another by addressing fundamental aspects of distributed computing, e.g. discovery, reliability, security, dynamic openness, virtualization, and manageable performance. From a practical viewpoint, however, a key stumbling block for many distributed computing systems has been the scale of deployment and use, or rather the lack thereof.

In the last decade, though, we have seen the development of economically viable, global-scale distributed systems. Clearly, the Internet provides basic connectivity among computing systems, and the World Wide Web serves content that has become the de facto ‘font of all human knowledge’. With these developments firmly entrenched in human society, the next goal is to develop a distributed computing capability that captures a sufficient ‘mindshare’, not only as a good technical idea, but also as something that is economically self-sustaining in the marketplace. We note that the goal here is not just to make economic progress, but to transform the availability of all data and computing, in much the same way that the economics of cluster computing has transformed the availability of supercomputing. This has always been the goal of the OGF.

With its roots in the ‘big science’ computing community that wanted to share massive data sets and large parallel machines, the traditional Grid concept focused on the managed sharing of resources across administrative boundaries. Based on the traditional scientific computing practices of the day, this typically meant staging binaries and data, and submitting jobs to remote batch schedulers. To streamline such operations, tools such as GridFTP were developed, along with metadata catalogs for machines and networks, monitoring systems, and certificate-based security, whereby role-based authorization could be done according to one's Grid identity within a virtual organization. But this general model of operations is not necessarily the most appropriate for all possible application domains.

The industrial arena has seen the development of utility computing, or internet computing, or as it has most recently been called, cloud computing. The cloud concept involves the virtualization of resources, which can be done at different levels of the system stack. Roughly speaking, virtualization can be done at the infrastructure level (allocating a bag of Linux nodes), the platform level (allocating language-specific hosting environments), or the application services level (hosting sets of pre-defined services in a data center). Systems at each of these levels are being called clouds because the resources or services are being virtualized at some nebulous place ‘in the clouds’. This is indeed a critical development, but it is not the end of the story.
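To make these three levels concrete, consider the following deliberately simplified sketch. The Cloud class and its methods are hypothetical, invented purely for illustration; they correspond to no vendor's actual API, but show what a user requests, and what remains under the user's control, at each level of the stack.

```python
# Hypothetical sketch of the three levels at which cloud resources are
# commonly virtualized. The Cloud class and its methods are illustrative
# inventions, not any real provider's API.

class Cloud:
    def provision_nodes(self, count, image):
        # Infrastructure level: the user receives raw (virtual) machines
        # and retains control over everything above the hardware.
        return [{"node": i, "image": image} for i in range(count)]

    def deploy_app(self, runtime, code):
        # Platform level: the user supplies code for a language-specific
        # hosting environment; the provider manages the machines beneath it.
        return {"runtime": runtime, "endpoint": "https://app.example.org"}

    def call_service(self, name, **args):
        # Application services level: the user simply invokes a pre-defined
        # service hosted in the provider's data center.
        return {"service": name, "result": "response for %s" % (args,)}

cloud = Cloud()
nodes = cloud.provision_nodes(count=16, image="linux-base")    # infrastructure
app = cloud.deploy_app(runtime="python3", code="handler.py")   # platform
reply = cloud.call_service("geocode", address="1 Main St")     # application
print(len(nodes), app["endpoint"], reply["service"])
```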
Current cloud usage is primarily between a client and a single cloud provider, and these clouds are typically closed, proprietary systems. This means that, without some notion of cloud interoperability, such systems become ‘walled gardens’ in which vendors lock in their clients. While these systems can support a wide segment of the web hosting and e-commerce marketplace, there is a non-trivial segment of applications that are inherently distributed and must cross administrative domains. Such applications include environmental management systems, disaster mitigation systems, air traffic control systems, and weather satellite data systems, to name just a few. These will require existing Grid capabilities such as Grid identities, the delegation of trust, virtual organizations, etc.

There is also a fundamental trade-off between abstraction and control. Abstraction enables simplicity and ease of use, which have greatly facilitated the adoption of cloud technology. Many application domains, however, require known performance behavior from the infrastructure. This can only be accomplished if the user has some degree of insight into the infrastructure and can control it sufficiently to achieve the desired results, such as managing any affinities between data and computation.

It is in this context of a rapidly evolving technical landscape that we present these papers. While these papers are ostensibly ‘Grid’ papers, they are highly relevant to clouds as well, as they address fundamental topics for distributed systems. The paper by Dabrowski surveys work done in reliability for large-scale, heterogeneous, dynamic environments, with an encyclopedic list of citations. The paper by Riedel et al. reports on the work of the Grid Interoperation Now community group to achieve interoperation among the world's major e-Science infrastructures. For this group, interoperation is a near-term goal, to be achieved by whatever practical means are necessary, that facilitates the long-term goal of interoperability through common open standards. The paper by Merrill and Grimshaw refines OASIS and W3C standards to produce two new profiles for secure addressing and communication that enable the actual brokering of interorganizational trust. The paper by Grimshaw et al. demonstrates how naming and binding techniques are used in WS-Naming to achieve naming transparency, i.e. abstracting a name from a physical entity, which is necessary for the transparent fail-over, replication, and migration that improve reliability. The paper by Gutiérrez et al. reports on standard services for accessing metadata in the Resource Description Framework Schema to support such services as resource discovery, selection, brokering, monitoring, and accounting. The paper by Smith et al. examines two methods for accessing computational resources: the HPC Basic Profile, which supports batch job scheduling for scientific or technical computing, and the Simple API for Grid Applications (SAGA), which provides a general programmatic interface with support for job submission and management (a flavor of which is sketched at the end of this editorial). The paper by Brown et al. describes standard methods for representing network measurements and topology with the goal of managing network configuration and performance. This is critical for properly supporting application domains such as visualization, multi-domain alerting systems, and end-to-end performance diagnostics, as well as the overall concept of service-oriented networks.
Finally, the paper by Jha et al. develops the notion of affinities to explain how a system's internal properties support different usage patterns. The authors then argue that clouds can be considered a higher-level abstraction of Grids. Clearly, we expect that the future will see an integration of Grids and clouds, one that draws on the technical capabilities of both, where most appropriate, for the spectrum of applications and their requirements.

We conclude this editorial as we started it. We are documenting but one step in the evolution toward standards-based, applied distributed computing that is economically viable in the marketplace. Much work remains to be done.
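As promised above, here is a minimal sketch of the kind of programmatic access discussed in the paper by Smith et al., written against the saga-python bindings of the SAGA standard (installable via pip as saga-python); other SAGA implementations differ in detail but follow the same pattern of service, job description, and job. Treat this as a sketch of the API's flavor under those assumptions, not a definitive usage guide.

```python
# Minimal SAGA-style job submission, assuming the saga-python bindings.
import saga

# A job service bound to a resource manager; 'fork://localhost' runs jobs
# on the local machine, while other URLs would target remote schedulers.
js = saga.job.Service("fork://localhost")

# Describe the job: what to run, with what arguments, and where output goes.
jd = saga.job.Description()
jd.executable = "/bin/echo"
jd.arguments = ["hello from a SAGA job"]
jd.output = "job.out"

# Create the job from the description, run it, and wait for completion.
job = js.create_job(jd)
job.run()
job.wait()
print("state: %s, exit code: %s" % (job.state, job.exit_code))
```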