We are living in the epoch referred to as the ‘fourth industrial revolution’. The fourth industrial revolution (4IR) is characterized by a fusion of technologies that blurs the digital, physical, and biological spheres (e.g., cyberspace, virtual and augmented reality, body-machine interfaces, and robotics). Two developments seem certain: the ubiquitous adoption of these technologies, and the rise of futurism. The former refers to the increasing use and normalization of such technologies in everyday life, government service provision, and industry. The latter refers to the philosophical and science-fiction discussions emerging as a result of these changes (e.g., debates around the ‘singularity’, transhumanism, and posthumanism, often presented in utopian or dystopian terms). As such, the definition of digital ethics can be expanded and expressed in terms of the impacts of new digital technologies, through analysis of their potential opportunities and risks in contemporary and future contexts. Many actors are working on forward-looking policy frameworks and governance protocols, with broad multistakeholder engagement and buy-in, to accelerate the adoption of emerging technologies in the global public interest: artificial intelligence (AI) and machine learning (ML), blockchain, 5G, data analytics, quantum computing, autonomous vehicles, synthetic biology, the internet of things (IoT), and autonomous weapons systems (AWS), sometimes called ‘killer robots’. We have gained insight into the unequal distribution of the positive and negative impacts of AI on human rights throughout society, and have begun to explore the power of the human rights framework to address these disparate impacts. Although internationally recognized human rights laws and standards provide a common standard of achievement for all people in all countries, more work is needed to understand how they can best be applied in the context of disruptive technology.
AI systems raise myriad questions for society and democracy, only some of which are addressed by existing laws. To fill these perceived gaps, a vocal group of governments, industry players, academics, and civil society actors have been promoting principles or frameworks for ethical AI. COVID-19 accelerated the use of AI in all countries and all fields, hastening the transition to a society increasingly based on AI; it also heightened existing threats and introduced new risks to human rights in the context of AI deployment. The human rights implications of governments' aggressive measures targeting the spread of COVID-19-related misinformation are also discussed. The question of whether corporations can act ethically is particularly relevant for Big Tech. Many of these firms are oligopolists on whom individuals and governments alike depend completely, though with little to no capacity to independently remedy issues when they arise, as Project Maven showed. Artificial intelligence and automated decision-making tools are growing in power and centrality, and technology companies retain large troves of private data that they sell. These companies are at the forefront of technological innovation and may be preoccupied with the factual question of what can be done rather than the normative question of whether it should be done. All these issues arise in a field with little to no government regulation or intervention. The threats AI poses to society are so new that the legal system is struggling to impose sufficient values and restrictions. A coherent approach to addressing AI ethics, values, and consequences is therefore urgently needed. In May 2019, 42 countries adopted the Organisation for Economic Co-operation and Development (OECD) AI Principles, a recommendation comprising five principles and five recommendations related to the use of AI.
To support implementation of the Principles, the OECD launched the AI Policy Observatory in February 2020. The Observatory publishes practical guidance on implementing the AI Principles and maintains a live database of AI policies and initiatives worldwide. It also compiles metrics and measurements of global AI development and uses its convening power to bring together the private sector, governments, academia, and civil society. The AI ethics and governance initiatives discussed here are cause for optimism that the global community will marshal all available models and expertise toward the analysis and, ultimately, the global governance of AI.