Economic interactions – especially online – generate data that stimulates strategic use of artificial intelligence (AI), machine learning (ML) and deep learning (DL): by businesses for predictive analytics, process optimisation and market power; by consumers for search, decision-making and (again) market power; and by governments for detecting criminal or harmful behaviour, gathering evidence and regulation. Not all uses increase competition and efficiency. One recent concern is algorithmic collusion (AC): whether revenue-management algorithms can signal and implement tacitly collusive behaviour. This paper summarises the theoretical and empirical evidence, considers how specific business ML methods may affect AC, and asks whether consumer and regulator algorithms can detect or solve the resulting problems. It also examines the links between Internet regulation and competition/consumer protection policy.

Much early ML literature concentrated on programmes 'learning' about their environments. A simple version would predict tomorrow's prices from historical data and set profit-maximising prices accordingly (sketched below). This could involve estimating rivals' prices or costs (taking their behavioural rules as given), trying to identify those behavioural rules, or trying to influence rivals' learning. Here, AI includes anything from fixed rules mapping data to prices to deep neural networks; ML denotes AI systems that program themselves to optimise specific objectives (and thus have at least one 'hidden layer'); and DL is ML with many hidden layers. Increased depth, and thus computation, makes behaviour an intricate convolution of data and programme history that is less visible to those who programmed the system, let alone explainable to 'outsiders'. If many firms use ML, learning chases a 'moving target' and may fail to converge or may produce unintended consequences.

Conventional AC models use simple algorithms to demonstrate behaviour consistent with collusion in models of repeated interaction. Such behaviour is neither inevitable nor classically collusive, especially without good communication. More sophisticated approaches, however, suggest that populations of even simple AI agents can learn to adopt sophisticated reward/punishment strategies that sustain profitable outcomes. This paper considers further variations, taking into account e.g. the size and targeting of price deviations, finite-memory or dominance-elimination strategies, and the difference that product characteristics (durability, quality uncertainty, purchase frequency) and search services can make. Simulation results illustrate a range of classic market inefficiencies (overshoot, convergence to prices between monopoly and oligopoly levels, cyclic behaviour and endogenous market-sharing collusion).

From the regulatory perspective, it is not clear what is illegal and what could or should be banned. This raises questions of detecting AC (e.g. by DL) and of limiting its spread or consequences. We consider: i) restrictions on the information available to firms; ii) constraints on the speed or size of pricing changes; iii) coding standards, e.g. to incorporate regulatory compliance in ML objectives; and iv) algorithmic detection of specified anticompetitive behaviours. For iii), we show that populations using (e.g.) likelihood-ratio policy-gradient reinforcement learning are more likely to converge to collusive behaviours (tit-for-tat) when they take other firms' learning into account, and are better able to shape others' learning depending on the prevalence of AI and the topology of information.
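As a concrete illustration of the 'simple version' described above, the following is a minimal sketch, not the paper's method: it fits an AR(1) forecast to a rival's price history, then sets the firm's own price from the first-order condition of an assumed linear-demand profit function. All numbers (the price series, demand parameters, marginal cost) are illustrative assumptions.

```python
# A minimal sketch, assuming AR(1) rival-price dynamics and linear demand.
import numpy as np

history = np.array([1.50, 1.48, 1.52, 1.47, 1.51, 1.49])  # rival's past prices (assumed)

# AR(1) fit by least squares: p_{t+1} = b0 + b1 * p_t
X = np.column_stack([np.ones(len(history) - 1), history[:-1]])
b, *_ = np.linalg.lstsq(X, history[1:], rcond=None)
forecast = b[0] + b[1] * history[-1]

# Assumed linear demand q = a - b_own * p + b_cross * p_rival, marginal cost c.
a, b_own, b_cross, c = 10.0, 4.0, 2.0, 1.0

# Profit (p - c)(a - b_own * p + b_cross * p_rival) is concave in p;
# the first-order condition gives the best response in closed form.
p_star = (a + b_cross * forecast + b_own * c) / (2 * b_own)
print(f"forecast rival price: {forecast:.3f}, profit-maximising own price: {p_star:.3f}")
```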
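The repeated-interaction learning dynamics summarised above can likewise be illustrated with a toy simulation in the spirit of the conventional AC literature, not a reproduction of the paper's simulations: two Q-learning agents repeatedly set prices in a logit-demand duopoly, each conditioning on the rival's last price. The price grid, demand parameters, learning rate and exploration rate are all assumptions.

```python
# A minimal sketch of Q-learning pricing agents in a logit-demand duopoly.
import numpy as np

rng = np.random.default_rng(0)

prices = np.linspace(1.0, 2.0, 10)     # discrete price grid (assumption)
n = len(prices)
cost, mu, qual = 1.0, 0.25, 2.0        # marginal cost, logit noise, quality (assumed)

def profits(p_i, p_j):
    """Logit demand shares; each firm's stage profit at the posted price pair."""
    u = np.exp((qual - np.array([p_i, p_j])) / mu)
    share = u / (u.sum() + 1.0)        # +1 for the outside option
    return (np.array([p_i, p_j]) - cost) * share

# State = the rival's last price index; one Q-table per firm.
Q = [np.zeros((n, n)) for _ in range(2)]
alpha, gamma, eps = 0.1, 0.95, 0.05
state = [0, 0]                         # arbitrary initial "last prices"

for t in range(100_000):
    acts = []
    for i in range(2):
        if rng.random() < eps:         # epsilon-greedy exploration
            acts.append(int(rng.integers(n)))
        else:
            acts.append(int(Q[i][state[1 - i]].argmax()))
    pi = profits(prices[acts[0]], prices[acts[1]])
    for i in range(2):
        s, a_i, s2 = state[1 - i], acts[i], acts[1 - i]
        Q[i][s, a_i] += alpha * (pi[i] + gamma * Q[i][s2].max() - Q[i][s, a_i])
    state = acts

print("long-run prices:", prices[state[0]], prices[state[1]])
```

Depending on the seed and parameters, runs of this kind settle at prices between the competitive and monopoly levels, cycle, or exhibit punishment-like responses to deviations, which is the pattern of inefficiencies the text describes.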
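Finally, the likelihood-ratio policy-gradient learners mentioned under iii) can be sketched as REINFORCE agents in a repeated two-price game whose policies condition on the rival's last move, so that tit-for-tat (match the rival's last price) is representable. The learning-aware (opponent-shaping) variant that the text associates with convergence to collusion would add a term for the rival's gradient step; that extension is omitted here, and the payoff matrix is an illustrative assumption.

```python
# A minimal sketch of likelihood-ratio (REINFORCE) policy gradient learners.
import numpy as np

rng = np.random.default_rng(1)

# payoff[my_move, rival_move]: 0 = low price, 1 = high/collusive price.
# An assumed prisoner's-dilemma structure: undercutting dominates the stage game.
payoff = np.array([[1.0, 3.0],
                   [0.0, 2.0]])

def act(theta, rival_last):
    """P(high price) = sigmoid(theta[rival_last]); sample an action."""
    p = 1.0 / (1.0 + np.exp(-theta[rival_last]))
    return int(rng.random() < p), p

theta = [np.zeros(2), np.zeros(2)]     # one parameter per observed rival move
lr, T, episodes = 0.05, 50, 5000

for ep in range(episodes):
    last = [1, 1]                      # start from mutual high pricing
    grads = [np.zeros(2), np.zeros(2)]
    rets = [0.0, 0.0]
    for t in range(T):
        moves = []
        for i in range(2):
            a, p = act(theta[i], last[1 - i])
            grads[i][last[1 - i]] += a - p   # likelihood-ratio term d log pi / d theta
            moves.append(a)
        for i in range(2):
            rets[i] += payoff[moves[i], moves[1 - i]]
        last = moves
    for i in range(2):                 # REINFORCE: ascend return-weighted score
        theta[i] += lr * (rets[i] / T) * grads[i]

for i in range(2):
    p = 1.0 / (1.0 + np.exp(-theta[i]))
    print(f"firm {i}: P(high | rival low)={p[0]:.2f}, P(high | rival high)={p[1]:.2f}")
```

Plain REINFORCE of this kind often drifts toward mutual undercutting; the text's point is that agents which additionally model and shape their rivals' parameter updates are the ones more likely to lock in reciprocal, tit-for-tat-style pricing.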