There is increasing enthusiasm for, and recognition of, the benefits that artificial intelligence (AI) can provide to society. The emphasis has been on the positive, but AI and deep learning can also be used for negative purposes. Modular Neural Networks (MNNs) are capable of independent learning and have been applied to complex, evolving financial systems (a minimal illustrative sketch of such a modular architecture is given at the end of this section). If the goal of an MNN were defined as system penetration, there is no reason why such an algorithm could not run in the background. There are resource requirements, but organised crime groups, technology companies, nation states and curious individuals are all capable of meeting them.

Ordered society and security require a degree of certainty that the systems on which society depends will remain recognisable, dependable and resilient. Under current conditions, security is difficult enough. It is suggested that limitations may be required before the release of certain AI systems, in the knowledge of their potential for detriment to society. An AI system capable of independent learning permits undefined emergent behaviours. Whether the results of any emergent properties are benign or malign is irrelevant: scientific history is littered with developments whose uses were redirected away from the benign. Such concern could be interpreted as fear of the unknown, standing in the way of technological advance. Yet unless society wishes to become machine-driven, the power and control of systems should be defined and limited by society, not accidentally sprung on humanity or based on a ruthless logic that may drive a system to an unacceptable conclusion.

Sophisticated botnet-forming methods already exist that ensure botnet persistence. Combined with AI techniques, botnets could in principle exist in perpetuity, with no one able to predict their emergent behaviour and with no time limit on their evolution. Whither cyber defence in the face of unstoppable, increasingly intelligent, goal-directed systems?
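
To ground the MNN concept referenced above, the following is a minimal, purely illustrative sketch of a modular neural network in which each module learns independently on its own region of a toy task and a simple gating rule routes inputs between them. All names (ExpertModule, gate) and the toy task are assumptions introduced for illustration, not drawn from any particular system.

import numpy as np

rng = np.random.default_rng(0)

class ExpertModule:
    """One-hidden-layer network trained independently of its peers."""
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.5, (n_hidden, 1))

    def forward(self, X):
        self.h = np.tanh(X @ self.W1)
        return self.h @ self.W2

    def train(self, X, y, lr=0.05, epochs=500):
        # Plain gradient descent on squared error: each module learns
        # from its own data slice, independently of the other modules.
        for _ in range(epochs):
            err = self.forward(X) - y
            dW2 = self.h.T @ err / len(X)
            dh = (err @ self.W2.T) * (1 - self.h ** 2)
            dW1 = X.T @ dh / len(X)
            self.W2 -= lr * dW2
            self.W1 -= lr * dW1

def gate(x):
    # Hard gating rule: route each input to one expert by region.
    return 0 if x[0] < 0 else 1

# Toy task: y = sin(3x) on [-1, 0), y = x^2 on [0, 1]. Each expert
# specialises on one region and never sees the other's data.
X = rng.uniform(-1, 1, (400, 1))
y = np.where(X < 0, np.sin(3 * X), X ** 2)

experts = [ExpertModule(1, 16), ExpertModule(1, 16)]
for i, e in enumerate(experts):
    mask = np.array([gate(x) == i for x in X])
    e.train(X[mask], y[mask])

# At inference, the gate picks the module responsible for each input.
x_test = np.array([[-0.5], [0.5]])
pred = [experts[gate(x)].forward(x.reshape(1, -1))[0, 0] for x in x_test]
print(pred)  # approximately [sin(-1.5), 0.25]

The point of the sketch is structural rather than adversarial: because each module trains in isolation, the behaviour of the combined system is not fully specified by any single training run, which is the property the discussion above flags as a source of undefined emergent behaviour.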