Data security implies the avoidance of theft, loss or unauthorised disclosure of data. This may have financial implications, either because the lost data can be converted into money by the person who takes them (as in computer fraud) or because they offer some competitive advantage to a competitor (as in industrial espionage). Data security may also have legal implications under the terms of the Data Protection Act, though that is the subject of another article. Data integrity implies ensuring that data remain accurate, up-to-date and valid. Loss of integrity occurs because data are either modified in some unauthorised way or are not modified and updated when the situation they represent changes (so that, for example, a stock control system fails to record issues or receipts of stock and authorised users obtain an inaccurate stock report). For the purposes of this article, I shall use the term security to cover both of these aspects.

The two terms can also be used with the prefix system rather than data, to indicate loss or unavailability of the entire system, whether because of a technical fault or an accident. Most organisations take trouble to ensure that a breakdown causes minimum disruption to their IT processing, generally through having effective maintenance and repair procedures, but this may also extend to having backup systems or failsafe processing agreements with other agencies. For particularly sensitive applications, fault-tolerant systems (which normally have two processors and which mirror all files to a second disk) are available to ensure continuity of service. Such procedures are commonly used on mainframes and minicomputers but are often too expensive to implement on micros. However, the increasing use of networks based around a powerful server which contains large amounts of software and data means that such techniques must be extended to this mode of operation. One common source (too common, unfortunately) of unavailability is program error.
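The file mirroring performed by fault-tolerant systems, mentioned above, can be sketched in outline. This is a simplified illustration only: real fault-tolerant hardware mirrors writes at a much lower level, and the function names and file paths here are invented.

```python
import os

def mirrored_write(data: bytes, primary: str, mirror: str) -> None:
    """Write the same data to a primary file and a mirror copy,
    so that the failure of one disk does not lose the data."""
    for path in (primary, mirror):
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the bytes out to the disk

def mirrored_read(primary: str, mirror: str) -> bytes:
    """Read from the primary copy; fall back to the mirror if it fails."""
    try:
        with open(primary, "rb") as f:
            return f.read()
    except OSError:
        with open(mirror, "rb") as f:
            return f.read()
```

The point of the fallback in `mirrored_read` is continuity of service: if the primary file is lost or unreadable, the user still obtains the data from the second copy.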
All programs, both in-house and commercially supplied, contain some bugs; some are severe enough to bring the system to a grinding halt. Only proper design and testing can eliminate this source of problems. It is to be hoped that software suppliers refrain from releasing early versions of software onto the market and expecting users to carry out the final testing. In the meantime, however, it is important that users are educated to be aware of such problems and to use a routine which minimises loss in such cases, e.g. by saving word processing files at regular intervals.

Similarly, it is program and system design that avoids many of the problems of unauthorised amendment of data. Many programs which involve valuable data have authorisation processes which limit the access of individuals to parts of the program or parts of the data. Thus it may be possible to assign read rights which enable a particular user to access data on screen but not to make any changes. Other, higher-level users may be assigned read/write status, allowing them to make amendments to the data. This status is normally provided by issuing specific identity codes and/or passwords to different classes of user. The protection is thus in two parts: the administrator assigns rights to different users, and the software validates those rights by requesting the identification code or password before certain activities are allowed. Software controls also include the use of audit trails, where the system maintains some form of log of which users did what, and when. This may not directly prevent unauthorised access, but it is a means of tracking down culprits and acts as a deterrent through the threat of detection.

Another software-related problem is that of conflicting programs. This occurs where a user has his or her favourite TSR (terminate and stay resident) program loaded into memory and then attempts to use another package which makes a bid for the same area of memory.
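The two-part protection described above, in which the administrator assigns rights and the software validates them before an activity is allowed, can be sketched as follows. The user classes, rights and log format are invented for illustration and do not reflect any particular package.

```python
from datetime import datetime, timezone

# Rights as assigned by the administrator (illustrative data only).
USER_RIGHTS = {
    "clerk":   {"read"},            # may view data on screen, nothing more
    "manager": {"read", "write"},   # may also amend the data
}

AUDIT_TRAIL = []  # the system's log of which users did what, and when


def authorised(user: str, action: str) -> bool:
    """Validate a user's rights before an activity is allowed,
    recording the attempt in the audit trail either way."""
    allowed = action in USER_RIGHTS.get(user, set())
    AUDIT_TRAIL.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "allowed": allowed,
    })
    return allowed
```

Note that a refused request is still logged: a clerk's attempt to write would return `False` but would appear in `AUDIT_TRAIL`, which is precisely what makes the trail useful for tracking down culprits after the event.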
Usually the clash simply prevents the loading of the second package, but it can result in memory corruption. If that part of memory is then saved to disk, it may overwrite a previous, uncorrupted version of the datafile, and the genuine, valid data are lost. When attempting to use TSR programs (especially more than one), the user should test their compatibility with sample data that can be lost without worry.

There are numerous stories about grand theft of money, or of valuable data, from computing systems. When one tries to trace the source of such stories, it is often difficult to get back to a reliable source. This may be because the story is apocryphal, based on rumour; or because the organisation that suffered the loss does not want publicity to be