The current regulatory environment governing computer information systems is confused because of the multiplicity of means that can be employed to regulate a wide variety of dissimilar services. The Federal Communications Commission, which regulates broadcasters and common carriers transmitting electronic data, considers computer information systems to be "enhanced" services, and therefore does not regulate them. However, some specific aspects of computer information systems are governed by existing case law and statutes.
Let us start with a hypothetical situation. The Data Playground is a large, full-service bulletin board system. In the BBS's message system, one of the fora, called the Sewer, is set aside for the users as a place to blow off some steam and express their anger at whatever they feel like complaining about. Samantha Sysop, the bulletin board operator, feels such a forum is necessary. She feels that without it, frustrated users will leave unpleasant messages in the other fora, which are meant for rational discussions of serious topics. By providing the Sewer, users who get upset with other users or with life in general can "take their problem to the Sewer." Because she is unsure of any liability for posts in the Sewer which get too heated, she posts a disclaimer, which can be seen the first time a user posts in or reads the Sewer, stating that the SYSOP disclaims all liability for anything that is said in the Sewer. Samantha Sysop reads the posts left in the Sewer, and once in a while posts a message there herself. One day a user, Sam Slammer, leaves the following message in the Sewer:
"From: Sam Slammer
I am sick and tired of logging onto this damned bulletin board and seeing that damn user Dora Defamed here. She is always here. However, at least if she is here it means that she is not still at home beating her young daughter. In fact, her daughter is too good looking to be stuck with a mother like Dora. She should be stuck with someone like me, after all, I really like young girls, and having sex with her would be a real catch. (If anyone would like to see the films of the last little girl I had sex with, leave me mail) Anyway, Dora: it is a wonder that kid isn't brain damaged, seeing as you are so badly warped. I would really like to do society a favor and kill you before you get the chance to beat any more children. In fact, if anyone is near the computer where Dora is connected to this BBS from, I urge you to go over to her and kill her. Do us all a favor."
This hypothetical post raises a number of issues. In one post there is potentially defamatory speech, speech advocating lawless action, fighting words, and an admission and solicitation of child pornography.
Defamation can occur on a computer information system in a number of forms: posts on a bulletin board system, like the one in the Sam Slammer hypothetical, can be defamatory, as can electronic periodicals; file servers and databases can distribute defamatory material; E-mail can contain defamatory statements. Defamation can even be distributed in the form of a scanned photograph. But what is defamation, and what risks and obligations does it present to a system operator?
Defamation occurs in two forms: libel and slander. The difference between these two forms is often not apparent from a common sense approach; rather, it is solely a matter of form, and "no respectable authority has ever attempted to justify the distinction on principle." With the rise of new technologies that blur the distinction between libel and slander, many courts have advocated eliminating the distinction. Speech on a computer information system has more of the characteristics of libel than of slander. Most courts, relying on libel cases, have treated messages appearing on computer information systems as libel rather than slander; often judges simply use the generic term "defamation."
Slander is publication in a transitory form - speech, for example, is slander. Libel, on the other hand, is embodied in a physical, longer lasting form, or "by any other form of communication that has the potentially harmful qualities characteristic of written or printed words." Written or printed words are considered more harmful than spoken words because they are deemed more premeditated and deliberate. For example, Sam Slammer had to sit down at a keyboard and compose his post; it is not a matter of a comment carelessly made in a fit of anger. Printed words also last longer, because they are put in a form in which they can serve to remind auditors of the defamation, while the spoken word is gone once uttered. Had Sam Slammer accused Dora Defamed of child abuse in person, the statement would be fleeting; on the BBS it is stored for viewing by any user who decides to read what posts have been left in the Sewer. For days, weeks, or months people can read Sam's statement unless Samantha Sysop removes it. Any user can save a copy of the post on his or her own computer, and can distribute it, verbatim, to anyone else, with Sam's name right at the top. Text on a computer screen shares more traits with libel than with slander. Computer text appears as printed words, and it is often more premeditated than spoken words. Computer text can be called up off of a disk as many times as is needed. The message can even be printed out, and the text can be more widely circulated than the same words when they are spoken.

In its barest form, libel is the publication of a false, defamatory and unprivileged statement to a third person. "Defamatory" communication is defined as communication that tends to harm the reputation of another so "as to lower him [or her] in the estimation of the community or to deter third persons from associating or dealing with him [or her]."
Actual harm to reputation is not necessary for a statement to be defamatory, and the statement need not actually result in a third person's refusal to deal with the object of the statement; rather the words used must merely be likely to have such an effect. For this reason, if the person defamed already looks so bad in the eyes of the community that his or her reputation could not be made worse, or if the statements are made by someone who has no credibility, there will not be a strong case for defamation. "Community" does not refer to the entire community, but rather to a "substantial and respectable minority" of the community. Even more specifically, the community is not necessarily seen as the community at large, but rather as the "relevant" community. This means, for example, that one could post a defamatory message on a bulletin board system defaming another user and be subject to a libel suit, even though only other BBS users see the post.
In the hypothetical, we don't know whether Sam's accusations of child beating are true. If they are, Sam would have a defense against a charge of libel. The comment is being "published" to any other BBS user who reads the message Sam has left publicly, and as already discussed, the computer message has the same harmful qualities as a message written and distributed on paper. In fact, Sam's comments are potentially reaching a larger audience than Sam could have reached by simply posting a notice on a bulletin board in the local computer center. The remark about child abuse has the potential for lowering people's estimation of Dora, and could easily encourage people to avoid associating with her. Even if people do not avoid Dora because of the remark, in a defamation suit it is sufficient that the statements have the potential to have that effect, and here they clearly do.
The community at issue here is not the world at large, but rather a substantial and respectable minority of the "relevant" community. Bulletin board systems can give rise to a close knit group of users. Here, Dora is being attacked in a public forum in front of the whole community of users. This raises another issue: Can a person sue for defamation of a fictitious name or persona that appears on a computer? If "Dora Defamed" were not the BBS user's real name, could the real user sue Sam Slammer for defaming the user's "Dora" persona on the BBS? In a bulletin board community, unless users know each other in real life away from the computer, the only impression one user gets of another is from how he or she appears on the computer screen. The user in real life may not even be the same sex as the person he or she portrays on the bulletin board system. On the BBS, people only know and associate with Dora, not the real person behind the name. When Dora is defamed, in essence, so is the person behind the computer representation of Dora. The user is defamed in the eyes of the users behind all of the other BBS personalities that read Sam's post. It should not matter if Dora Defamed is not the user's real identity; a defamation action should still be allowed. The last issue is whether Dora is being defamed in front of at least a "substantial and respectable" minority of the relevant community. This hinges on who reads the Sewer forum. If the Sewer is widely read, a defamation suit will be more likely to succeed than if the Sewer is largely ignored.
There is one case, from Australia, which held that speech over a computer "bulletin board" was actionable in a libel suit. The case was a default judgment resulting from messages sent over the DIALx science anthropology computer bulletin board, a discussion group available worldwide and subscribed to by some 23,000 anthropology students and academics. The court found that a number of the statements made were capable of a defamatory meaning, that the statements were published throughout academic circles around the world, that the statements were likely to be further repeated, gaining in impact in the process, and that the statements would have a detrimental impact on the plaintiff's standing in the international academic circles in which his reputation was based. Due to his reputational and psychological injury, the court found he was deserving of an award of AU$40,000.
In the United States, another case is currently being pressed claiming defamation via computer information system. It involves remarks made over the Prodigy service on a financial discussion group. In this case, Peter DeNigris is being sued by MEDphone over disparaging remarks he made regarding MEDphone and its products in approximately two dozen notes posted over the course of three months.
Because defamation involves speech, defamation raises serious First Amendment concerns. That speech is defamatory does not mean that it is left unprotected. Analysis is based on the party or parties privy to the defamation. In our hypothetical, the relevant parties are Sam and Dora. Constitutional protection was first found for some types of defamation in *New York Times v. Sullivan*. This case involved an advertisement taken out in a newspaper expressing grievances with the treatment of blacks in Alabama. An elected city commissioner sued, claiming that the statements made in the advertisement defamed him and that the advertisement contained some inaccuracies. Justice Brennan argued that the case should be considered "against the background of a profound national commitment to the principle that debate on public issues should be uninhibited, robust, and wide-open, and that it may well include vehement, caustic, and sometimes unpleasantly sharp attacks on government and public officials." The court held that, because one of the main purposes of the First Amendment was to preserve debate and critical analysis of the affairs of elected officials, any censorship of that speech would be detrimental to society. Because of this, the court said libel laws should be relaxed where the speech pertains to the affairs of elected officials. Likewise, due to the importance of being able to examine the worthiness of public officials, the court felt that speech critical of officials should also be less open to attack on grounds of falsity. False speech, once made known, can be investigated; but true speech will remain undisseminated if the critic fears it cannot be proven true and may result in a libel suit. Because of the importance of monitoring elected officials, the court held that allowing speech that would aid in the monitoring of elected officials' conduct was more important than protecting officials from potential harm resulting from defamatory speech.
A balance between open debate and freedom from defamation was struck by establishing an "actual malice" standard of liability for the publisher. "Actual malice" is a term of art with a specific meaning in the publishing context. As the court stated:
"The constitutional guarantees require, we think, a federal rule that prohibits a public official from recovering damages for a defamatory falsehood relating to his [or her] official conduct unless he [or she] proves that the statement was made with "actual malice" -- that is, with knowledge that it was false or with reckless disregard of whether it was false or not."
This standard applies to electronic publishing as clearly as it applies to print or speech. Where public officials are concerned, SYSOPs and users are free from liability for defamation carried on computer information systems so long as the material is not allowed to remain once the SYSOP or user knows of its falsity or has reckless disregard for its truth. Dora, as far as we know, is not a public official. If Dora were a persona on the bulletin board system, and not the user's actual name, and if there were no way for the average user to associate the persona with the real person, then even if "Dora" were defamed and the real user *was* a public official, it would be questionable whether the public official privilege would apply. In this situation, the rationale behind the privilege would not be relevant to the actual facts. Statements about Dora do not reflect on the actual user's ability to perform his or her official job. If, however, the public official can be linked to the Dora persona, then the basis for privileging statements about public officials does apply, and Sam Slammer's statement may be privileged, presuming no actual malice was involved.
The *New York Times* standard was expanded in two important cases, *Curtis Publishing Co. v. Butts* and its companion case, *Associated Press v. Walker*. Both cases involved defamation of people who did not fit under the "public official" heading, but who were "public figures." As discussed in the concurrence, some people, even though they are not part of the government, are nonetheless sufficiently influential to affect matters of important public concern. The Court subsequently defined public figures as "[t]hose who, by reason of the notoriety of their achievements or the vigor and success with which they seek the public's attention, are properly classed as public figures ... ." Because these people have influence in our governance, just as public officials do, the same "actual malice" standard applies to such public figures. Here, as in the case of public officials, we do not really know who Dora Defamed is. If she is a public figure, Sam's child abuse claim may be privileged; if she is not, he may be liable.
Another major case defining the constitutional protection of defamation is *Gertz v. Robert Welch, Inc*. In *Gertz*, a magazine published an article accusing a lawyer of being a "Communist-fronter" and a "Marxist." The article accused the plaintiff of plotting against the police. The plaintiff was a lawyer who played a role in the trial of a police officer who was charged with shooting a boy. The lawyer sued for defamation. The publisher's defense was based on another exception to defamation law that the court had carved out in *Rosenbloom v. Metromedia, Inc*. *Rosenbloom* extended the *New York Times* standard to include not just public officials and public figures, but also private figures who were actively involved in matters of public concern. The *Gertz* court held that this expansion went too far, and the court overruled *Rosenbloom*. The court in *Gertz* acknowledged that the press should not be held strictly liable for false factual assertions where matters of public interest were concerned. Strict liability would serve to chill the publisher's speech by leading to self censorship where facts are in doubt. This First Amendment interest was balanced against the individual's interest in being compensated for defamatory falsehood. The court reasoned that private individuals were deserving of more protection than public officials and public figures because private persons do not have the same access to channels of communication, and they have not voluntarily exposed themselves to the public spotlight. The court held that "so long as they do not impose liability without fault, the States may define for themselves the appropriate standard of liability for a publisher or broadcaster of defamatory falsehood injurious to a private individual." Courts have not made it very difficult for private people to sue for defamation where there is no matter of public concern at issue. In one of the more famous defamation cases, *Dun & Bradstreet, Inc. v. Greenmoss Builders, Inc.*, Dun & Bradstreet was held liable for a credit report made from inaccurate records contained in a database. The court argued that statements on matters of no public concern, especially when solely motivated by profit, did not deserve sufficient First Amendment protection to outweigh the individual's interest in suing for defamation.
In our hypothetical, we must look to the subject of Sam Slammer's defamatory comment to see if it is a matter of public concern. Sam is accusing Dora of beating her daughter. While child abuse may be a matter of public concern, whether Dora in particular is an abuser is not likely a matter of public concern. The parallel can be seen in *Dun & Bradstreet*: while the ability of debtors generally to pay their debts can be a matter of public concern, the ability of one particular company to pay its debts is not necessarily so. Child abuse in the abstract is not the issue in this hypothetical; Dora Defamed's alleged child abuse is the issue.
The press has been found to have other privileges as a result of the kind of news it is reporting. One such privilege is for fair report, or "neutral reportage" (which is not an issue in our hypothetical). This insulates a reporter from liability for defamatory statements that he or she is reporting. The reasoning is that the fact that certain statements were made is itself a matter of public interest, especially around sensitive issues, and therefore the public interest is best served by allowing the press to inform people of these statements without the risk of liability. Neutral reporting is privileged, but if the reporter is found not to have lived up to the "actual malice" standard (knowing falsity or reckless disregard for the truth), his or her report will not be considered neutral and the fair report privilege will not apply.
Statements of opinion are also privileged. Protection of opinion is, of necessity, not absolute; otherwise "a writer could escape liability ... simply by using, explicitly or implicitly, the words 'I think.'" Sam Slammer cannot defend himself by saying, "Well, I *think* Dora beats her daughter." The court in *Cianci v. New Times Publishing Co*. succinctly laid out the limits of the opinion privilege:
"(1) that a pejorative statement of opinion concerning a public figure generally is constitutionally protected ... no matter how vigorously expressed; (2) that this principle applies even when the statement includes a term which could refer to criminal conduct if the term could not reasonably be so understood in context; but (3) that the principle does not cover a charge which could reasonably be understood as imputing specific criminal or other wrongful acts."
In the hypothetical, Sam made an outright accusation that Dora Defamed committed a criminal act. Even if he had merely stated that he believes she beats her daughter, unless the statement is clearly interpretable as an opinion, he is still likely to be held liable for his remark.
In sum, what this means for computer information systems, whether for speech on a bulletin board, text in an electronic journal, or any of the other forms of electronic publication, is that liability may result if a message is libelous. Liability may not result if the defamation concerns public figures, public officials, or matters of public interest. Communications that defame a user may not constitute defamation to the community at large, but the statements may still give rise to liability if they lower the opinion of the user in the eyes of the rest of the bulletin board's users.
The First Amendment states that "Congress shall make no law ... abridging the freedom of speech, or of the press." The First Amendment is one of the most important guarantees in the Bill of Rights, because speech is essential for securing other rights. While the right of free speech has been challenged by the emergence of each new medium of communication, the right still applies to new forms of communication, although its application is, at times, more restrictive. An example of such a restriction is the regulation of radio and television by the Federal Communications Commission. The rationale for F.C.C. governance is based on spectrum scarcity. Currently, this is not a real issue with computer information systems, but with the rise of packet radio and wireless networks, which transmit computer data through the airwaves, the F.C.C. may choose to regulate some aspects of computer information systems. Some people advocate that, with changes in technology, distinctions between different forms of media, such as between electronic and print media, should be eliminated; instead, one all-encompassing standard should be used. No matter what standard is employed, some forms of speech are currently not allowed on the local street corner or on the local computer screen. In our Sam Slammer hypothetical, questions arise as to whether his message contains some of this speech which is inappropriate for public consumption.
One type of speech not permitted is advocacy of lawless action, as laid out in *Brandenburg v. Ohio*. The court in *Brandenburg* held that the guarantees of free speech and free press do not forbid a state from proscribing advocacy of the use of force or of law violation "where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action." Sam threatened to kill Dora, and he urged others to kill her as well. An important distinction is made between mere advocacy and incitement to imminent lawless action: the first is protected speech, while the second is not. This distinction is quite important, yet can be blurry, in a computer context. On a bulletin board system, for instance, messages may be read by a user weeks after they have been posted. It is hard to imagine such "stale" messages as advocating *imminent* lawless action. In our hypothetical, Sam encourages anyone near the computer Dora is using to go kill her. A user who reads the post hours later may no longer have the opportunity to take the requested action, even if so inclined. Dora may be, for example, at home (beating her daughter?), and no longer at that computer. The action was advocated, but other users will not be incited to carry out the action because the act would not be possible at the time. An information system with a chat feature, which allows users to talk nearly instantaneously to one another, is, however, altogether different. With such a "chat" feature, it would be possible to make a *Brandenburg* incitement threat.
Another kind of speech not given First Amendment protection is "fighting words." Fighting words are "those which by their very utterance inflict injury or tend to incite an immediate breach of the peace." In *Chaplinsky v. State of New Hampshire*, the court held that fighting words (as well as lewd, obscene, profane, and libelous language) "are no essential part of any exposition of ideas, and are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality." The court further defined fighting words as words that have a direct tendency to provoke acts of violence from the individual to whom the remarks are addressed, as judged not by what the addressee believes, but rather by what a common person of average intelligence would be provoked into fighting. A message posted on a bulletin board or sent by E-mail could contain fighting words. Dora is being accused of being a child abuser, and in the message someone offers to sexually abuse her young daughter. There is no imminence requirement in *Chaplinsky* as there is in *Brandenburg*. Fighting words can be considered delivered to the addressee when the message is read. Dora will become enraged when she reads Sam's message. When Sam left the message has little bearing on when Dora will be ready to fight. While it is hard to fight with the message sender when he or she may not be nearby or even in the same country, that does not preclude some forms of "fighting." Of course, if the sender of the fighting words is nearby, actual fighting could occur. If the sender of the message is on a computer network, an angered recipient could "fight" by trying to tamper with or otherwise damage the sender's computer account.
If Sam had written his post about Samantha Sysop instead of Dora, he could find himself unable to access the bulletin board system, or he might find that the master's thesis he was word processing is suddenly missing from his computer account.
A statutory example of the fighting words doctrine is the prohibition against sending threats to kidnap, injure, or extort anything from another person. For example, Section 875(b) of Title 18 of the U.S. Code reads:
"(b) Whoever, with intent to extort from any person, firm association, or corporation, any money or other thing of value, transmits in interstate or foreign commerce any communication containing any threat to kidnap any person or any threat to injure the person of another, shall be fined not more than $5,000 or imprisoned not more than twenty years, or both."
This section was recently applied to convict a college freshman who sent an E-mail message to President Clinton threatening that "One of these days, I'm going to come to Washington and blow your little head off. I have a bunch of guns, I can do it." The note also threatened Hillary Rodham Clinton and the Clintons' daughter Chelsea. The statutory section used to convict the freshman in this case does not make any distinctions between the means of transportation for the message. As a result, it can be easily applied to users of electronic mail.
It is possible that a more adventuresome prosecutor could employ another statute in the case of threats made against the President. Section 871, which specifically covers threats against the President, Vice-President, and certain other officers of the United States, states that:
"(a) Whoever knowingly and wilfully deposits for conveyance in the mail or for delivery from any post office or by any letter carrier any letter, paper, writing, print, missive, or document containing any threat to take the life of, to kidnap, or to inflict bodily harm upon the President of the United States . . . shall be fined not more than $1,000 or imprisoned not more than five years, or both."
If a computer network can be considered "any letter carrier" and an E-mail message "any letter, writing, print, missive, or document," then this statute may be applicable to E-mailed threats as well.
Other areas of content are regulated on computer information systems. One is child pornography. *New York v. Ferber* held that states can prohibit the depiction of minors engaged in sexual conduct. The *Ferber* court gave five reasons for its holding. First, the legislative judgment that using children as subjects of pornography could be harmful to their physical and psychological well-being easily passes muster under the First Amendment. Second, application of the *Miller* standard for obscenity (discussed infra) is not a satisfactory solution to the problem of child pornography. Third, the financial gain involved in selling and advertising child pornography provides incentive to produce such material - and such activity is prohibited throughout the United States. Fourth, the value of permitting minors to perform or appear in lewd exhibitions is negligible at best. Finally, classifying child pornography as a form of expression outside the protection of the First Amendment is not incompatible with earlier court decisions. The court said, "[T]he distribution of photographs and films depicting sexual activity by juveniles is intrinsically related to the sexual abuse of children ..." and is therefore within the state's interest and power to prohibit. The Federal government has explicitly addressed child pornography as it pertains to computer communication. Section 2252 of Title 18 of the U.S. Code forbids the knowing interstate or foreign transportation or reception, by any means including computer, of visual depictions of minors engaged in sexually explicit conduct, including depictions that have been converted into computer-readable form. Recent international investigations into illegal child-pornography distribution via computer network have resulted in search warrants being issued to U.S. Customs agents in at least 15 states.
Pictures are easily converted into a computer-readable form. Once in such a form, they can be distributed, interstate or internationally, over a computer information system. Pictures are put into a computer by a process called "scanning" or "digitizing." Scanning is accomplished by dividing a picture up into tiny elements called pixels. The equivalent can be seen by looking very closely at a television screen or at a photograph printed in a newspaper. The computer examines each of these dots, or pixels, and measures its brightness; the computer does this with every pixel. The picture is then represented by a series of numbers that correspond to the brightness and location of each pixel. These numbers can be stored as a file for access on a bulletin board system or file server, or can be transferred over a network.
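The digitizing process described above can be sketched in a few lines of code. The sketch below is purely illustrative, not any actual scanning software: the 2x2 grid of color values and the particular brightness weights (the standard Rec. 601 luma coefficients) are assumptions made for the example. It divides a "picture" into pixels, measures each pixel's brightness, and produces the series of numbers that could then be stored in a file or sent over a network.

```python
def brightness(r, g, b):
    """Approximate perceived brightness (0-255) of one pixel
    using the Rec. 601 luma weights for red, green, and blue."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def digitize(image):
    """Flatten a grid of (R, G, B) pixels into a list of brightness
    values, ordered row by row - the 'series of numbers' that
    represents the picture."""
    return [brightness(*pixel) for row in image for pixel in row]

# A hypothetical 2x2 "picture": each tuple is one pixel's color.
image = [
    [(255, 255, 255), (0, 0, 0)],   # white pixel, black pixel
    [(255, 0, 0), (0, 0, 255)],     # pure red, pure blue
]

values = digitize(image)
print(values)  # [255, 0, 76, 29]
```

Each number in the resulting list records one pixel's brightness, and the pixel's position in the list records its location; a real scanner does the same thing for millions of pixels, after which the file is indistinguishable, to the computer, from any other file.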
Computers do not differentiate between "innocuous" pictures and pictures that are pornographic. A piece of child pornography can be scanned and distributed by file server, bulletin board, or E-mail just like any other computer file. If Sam Slammer had received a response from someone interested in seeing the pictures of the last time he had sex with a child, the pictures could easily be scanned into a computer-readable form and distributed over a BBS or computer network. While a computer may not differentiate between the subject matter of pictures, the law does. Persons responsible for distributing child pornography can be prosecuted, and such a prosecution could result in $50,000 or more in fines and damages. If Sam Slammer did try to distribute the pictures he made of the last time he had sex with a minor, his distribution of those pictures over a computer information system could result in a prosecution for distributing child pornography.
Another issue raised by section 2252 is possession of pornographic material. Anyone who "knowingly possesses 3 or more books, magazines, periodicals, films, video tapes, or other matter which contain any visual depiction [of child pornography] that has been mailed, or has been shipped or transported in interstate or foreign commerce, or which was produced using materials which have been mailed or so shipped or transported, by means including computer" can be fined and imprisoned for up to five years. While the requirement of knowledge may insulate some computer information systems such as networks, it clearly does not protect computer users who knowingly traffic in pornographic material stored in computer files. Thus, if Sam were distributing pornographic pictures in and out of his computer account, he could be charged under section 2252 with transporting material used in child pornography. He would probably need to be caught with three pictures in his account at the time, but it is likely that a prosecutor could ask a System Operator to look through any backups of the computer data which was in Sam's account at an earlier time. Typically, a System Operator will make a backup copy of all of the data stored on a computer system. This is done so that if the computer should malfunction, the information can be restored from the backup. Backups are often kept for a while before being erased, in essence freezing all of the users' accounts as they were at a time in the past. If pictures were also found in the backups, a claim could be made that Sam was in possession of these pictures as well. This would be an easy claim to make if Sam had the ability to ask the SYSOP to recover any of the files that are on these backups, but which are no longer in his actual account.
Based on the public policy against child pornography, it is likely that an attempt would be made to hold Sam responsible for the knowing possession of any files formerly in his account which could still be recovered from the System Operator's backups. However, if such a claim were attempted, it would also need to be shown that Sam knew of the accessibility of these backups, since the statute requires the *knowing* possession of the pictures. As to Samantha Sysop's liability, unless she knew what was stored in Sam's account, it is unlikely that she would be held liable for having child pornography stored on her computer system. Section 2252, as quoted above, contains a knowledge requirement. If Samantha Sysop did not know what was in Sam's account, she would not meet that requirement. If she had reason to know that Sam had pictures of child pornography in his account but intentionally turned her back, she might be considered to have constructive knowledge of the presence of the pornographic material on her system, and she could therefore be charged with knowing possession of the material. It is not likely to make a difference that the material is in Sam's account; Sam's account is still on Samantha's computer system, which she is responsible for maintaining in a legal manner.
Child pornographers, or pedophiles, may use bulletin board systems and E-mail for more than just storing and transporting pictures. There has been some publicity over bulletin boards being used by pedophiles to contact each other. Law enforcement use of bulletin board systems to track down pedophiles has not resulted in prosecutions of System Operators, but there have been convictions of BBS users who have arranged to make "snuff films" through contacts they have made over a computer.
Some areas of "computer crime" are regulated. Computer crime is an issue which computer information system operators should be aware of, as they may be on the receiving end at some point. The term "computer crime" covers a number of offenses, such as: the unauthorized accessing of a computer system; the unauthorized accessing of a computer to gain certain kinds of information (such as defense information or financial records); accessing a computer and removing, damaging, or preventing access to data without authorization; trafficking in stolen computer passwords; spreading computer viruses; and a number of other related offenses. All of these are activities which are often referred to as "hacking."
The first federal computer crime law, entitled the Counterfeit Access Device and Computer Fraud and Abuse Act of 1984, was passed in October of 1984.
"[T]he Act made it a felony knowingly to access a computer without authorization, or in excess of authorization, in order to obtain classified United States defense or foreign relations information with the intent or reason to believe that such information would be used to harm the United States or to advantage a foreign nation."
Access to obtain information from financial records of a financial institution or in a consumer file of a credit reporting agency was also outlawed. Access to use, destroy, modify or disclose information found in a computer system (as well as to prevent authorized use of any computer used for government business, if such use would interfere with the government's use of the computer) was also made illegal. The 1984 Act had several shortcomings, and was revised in the Computer Fraud and Abuse Act of 1986. The 1986 Act added three new crimes: a computer fraud offense, modeled after the federal mail and wire fraud statutes; an offense for the alteration, damage or destruction of information contained in a "federal interest computer;" and an offense for trafficking in computer passwords under some circumstances. Even the knowing and intentional possession of a sufficient number of counterfeit or unauthorized "access devices" is illegal. This statute has been interpreted to cover computer passwords "which may be used to access computers to wrongfully obtain things of value, such as telephone and credit card services."
The Computer Fraud and Abuse Act presents a powerful weapon for SYSOPs whose computers have been violated by hackers. The first person charged with violating the Act, Robert T. Morris Jr., was charged with releasing a "worm" onto a section of the Internet computer network, causing numerous government and university computers to either "crash" or become "catatonic." Morris is the son of the Chief Scientist at the National Security Agency's National Computer Security Center. His father is also a former researcher at AT&T's Bell Laboratories, where he worked on the original UNIX operating system. UNIX is the operating system run by many of the computers on the Internet. Morris claims that the purpose of his worm program was to demonstrate security defects and the inadequacies of network security, not to cause harm. However, due to a small error in his worm program, it got out of control and caused numerous computers to require maintenance to eliminate the worm, at costs ranging from $200 to $53,000. District Judge Munson read the Computer Fraud and Abuse Act, as it appeared at the time, largely as defining a strict liability crime. The relevant language applied to someone who:
"(5) intentionally accesses a Federal interest computer without authorization, and by means of one or more instances of such conduct alters, damages, or destroys information in any such Federal interest computer, or prevents authorized use of any such computer or information, and thereby -
(A) causes loss ... of a value aggregating $1,000 or more ...."

Judge Munson's interpretation was that this language requires intent only to access the computer, not intent to cause actual damage.
On appeal, Munson's reading was affirmed by the Court of Appeals, and the Supreme Court refused to hear further appeals.
Morris' lawyer, Thomas Guidoboni, described the statute as "perilously vague" because it treats intruders who do not cause any harm just as severely as computer terrorists. While the Judge's interpretation of the statute makes it a more powerful weapon in a prosecutor's corner, Guidoboni argues that Munson's interpretation violates the sense of fairness that underlies the U.S. criminal justice system, which almost always differentiates between people who intend to cause harm and those who do not. No one seems to argue that what Morris did was *right*, but many do not agree that he should have been charged with a felony, although he was convicted of one.
The jury in the Morris case indicated that the most difficult question was whether Morris' access to the Internet was unauthorized, even though defense counsel pointed out that two million subscribers had the same access. This section was recently clarified by the Computer Abuse Amendments of 1994. The section was rewritten, and the amendments broaden the scope of the protection offered in section 1030(a)(5)(A) in order to close a loophole contained in the earlier Act. "[I]ntentionally accesses a Federal interest computer" is no longer used; instead, the section applies to anyone who "through means of a computer used in interstate commerce or communications, knowingly causes the transmission of a program, information, code, or command to a computer or computer system ...." As amended, the section protects not only Federal interest computers but also privately owned computer systems used in interstate commerce or communication, which may be affected by someone acting through means of a computer located within the same state as the affected computer. The amendments also remove the "access" requirement from the statute. Instead, a specific intent to perform certain acts which may constitute direct or indirect access is written into the statute. Significantly, the statute also adds a requirement that there be either a specific intent or a reckless disregard as to whether the transmission will cause damage or withhold or deny use of a "computer, computer system, network, information, data, or program" in excess of the user's authorization. These changes should help to prevent the access and intent questions raised by the Morris incident.
The Computer Abuse Amendments of 1994 make two other changes: they allow civil remedies for damage caused by a violation of section 1030, and they provide specific protection against actions which modify or impair information or computers used in medical examination or treatment.
One of the favorite targets of computer hackers is the telephone company. Telephone systems are susceptible to computer hackers' illegal use. By breaking into the telephone company's computers, hackers can place free long distance calls to other computers. They can also break into telephone companies' computers and obtain lists of telephone credit card numbers. Trafficking in stolen credit card numbers and other kinds of telecommunications fraud costs long distance carriers about $1.2 billion annually. Distribution of fraudulently procured long distance codes is often accomplished over bulletin board systems, or by publication in electronic journals put out by hackers over computer networks. The major protection for the telephone companies is found in section 1343 of the Mail Fraud Chapter of the U.S. Code. This section prohibits the use of wires, radio or television to fraudulently deprive a party of money or property. This statute has been held to include fraudulent use of telephone services. Presumably, it may also cover fraudulent theft of computer services when the computer is accessed by wire. Computer information systems that knowingly distribute information aiding in wire fraud could be charged with conspiracy to violate section 1346 of the Mail Fraud Chapter, which specifically covers schemes to defraud. Some state laws exist to punish theft of local telephone service or publication of telephone access codes.
As pointed out in the introduction, computer viruses are increasingly of concern, both for operators of computer information systems and for users of those systems. But what is a virus? The term refers to any sort of destructive computer program, though it is usually reserved for the most dangerous ones. Computer virus crime involves an intent to cause damage, "akin to vandalism on a small scale, or terrorism on a grand scale." Viruses can spread through networked computers or by sharing disks between computers. Viruses cause damage by attacking another file, by simply filling up the computer's memory, or by using up the computer's processor power. There are a number of different types of viruses, but one factor common to most of them is that they all copy themselves (or parts of themselves). Viruses are, in essence, self-replicating.
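The self-replication just described can be shown with a deliberately harmless sketch. The program below is not a virus: it copies an inert data file inside a scratch directory and attacks nothing. It only illustrates the mechanism the text names, in which each "generation" duplicates a file, so one copy becomes many; repeated without limit, that is how a real virus exhausts storage or processor time.

```python
import os
import shutil
import tempfile

# Harmless illustration of self-replication: an inert data file is
# duplicated once per "generation" inside a scratch directory.
workdir = tempfile.mkdtemp(prefix="replication_demo_")
seed = os.path.join(workdir, "copy_0.dat")
with open(seed, "w") as f:
    f.write("inert payload")

# Three generations of copying: one file becomes four. A real virus
# repeats this unchecked, consuming disk space or memory as the text notes.
for generation in range(1, 4):
    shutil.copy(seed, os.path.join(workdir, f"copy_{generation}.dat"))

copies = len(os.listdir(workdir))
print(copies)  # 4
```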
Also discussed earlier was a "pseudo-virus," called a worm. People in the computer industry do not agree on the distinctions between worms and viruses. Regardless, a worm is a program specifically designed to move through networks. A worm may have constructive purposes, such as to find machines with free resources that could be more efficiently used, but usually a worm is used to disable or slow down computers. More specifically, worms are defined as, "computer virus programs ... [which] propagate on a computer network without the aid of an unwitting human accomplice. These programs move of their own volition based upon stored knowledge of the network structure."
Another type of virus is the "Trojan Horse." These are viruses which hide inside another, seemingly harmless program. Once the Trojan Horse program is used on the computer system, the virus spreads. The virus type which has gained the most fame recently is the Time Bomb, a delayed-action virus of some type. This type gained notoriety as a result of the Michelangelo virus. Designed to erase the hard drives of people using IBM-compatible computers on the artist's birthday, Michelangelo was so prevalent that it was even distributed accidentally by some software publishers when the software developers' computers became infected.
One concern about statutes dealing with computer viruses is that they need some kind of intent requirement. Without one, virus statutes may be so overbroad as to cover merely defective computer programs.
What legal remedies are available for virus attacks? Distributing a virus affecting computers used substantially by the government or financial institutions is a federal crime under the Computer Fraud and Abuse Act. If a virus also involves unauthorized access to an electronic communications system involving interstate commerce, the Electronic Communications Privacy Act may come into play. Most states have statutes that make it a crime to intentionally interfere with a computer system. These statutes will often cover viruses as well as other forms of computer crime. State statutes generally work by addressing any of ten different areas:
1. Expanded definitions of "property" to include computer data.
2. Prohibiting unlawful destruction of computer files.
3. Prohibiting use of a computer to commit, aid or abet commission of a crime.
4. Creating crimes against intellectual property.
5. Prohibiting knowing or unauthorized use of a computer or computer services.
6. Prohibiting unauthorized copying of computer data.
7. Prohibiting the prevention of authorized use.
8. Prohibiting unlawful insertion of material into a computer or network.
9. Creating crimes like "voyeurism": unauthorized entry into a computer system just to see what is there.
10. Prohibiting "taking possession" of or exerting control over a computer or software.
SYSOPs must also worry about being liable to their users as a result of viruses which cause a disruption in service. Service outages caused by viruses, or by shutdowns to prevent the spread of viruses, could result in a breach of contract where continual service is guaranteed; however, contract provisions could provide for excuse or deferral of obligations in the event of a disruption of service caused by a virus.
Similarly, SYSOPs are open to tort suits arising from negligent virus control. "[A SYSOP] might still be found liable on the ground that, in its role as operator of a computer system or network, it failed to use due care to prevent foreseeable damage, to warn of potential dangers, or to take reasonable steps to limit or control the damage once the dangers were realized." The nature of "care" has not yet been defined by court or statute. Still, it is likely that a court would find a provider liable for failure to take precautions against viruses when precautions are likely to be needed. SYSOPs are also likely to be held liable for not treating files they know are infected. Taking precautions against viruses would be likely to reduce the chances or degree of liability.
System Operators need to worry about damage caused by hackers as well as damage caused by viruses. While hackers are liable for the damage they cause, SYSOPs may find themselves on the receiving end of a tort suit for being negligent in securing their computer information system. For a SYSOP to be found negligent, there must first be a duty of care to the user who is injured by the hacker. There must then be a breach of that duty: the SYSOP must display conduct "which falls below the standard established by law for the protection of others against unreasonable risk of harm." Simply put, the SYSOP must do what is generally expected of someone in his or her position in order to protect users from problems a normal user would expect to be protected against. Events that the SYSOP could not have prevented, or foreseen and planned for, will not result in liability. A SYSOP's duty "may be defined as a duty to select and implement security provisions, to monitor their effectiveness, and to maintain the provisions in accordance with changing security needs." SYSOPs should be aware of the type of information stored in their systems, what kind of security is needed for the services they provide, and which users are authorized to use which data and services. SYSOPs also have a duty to explain to each user the extent of his or her authorization to use the computer information service.
The same analysis applies to operator-caused problems. If the SYSOP accidentally deletes data belonging to a user, or negligently maintains the computer system and thereby causes damage, he or she would be liable to the user to the same extent as for hacker damage that occurred because of the SYSOP's negligence.
Copyright © 1994 - 1995 by P-Law, Inc., and Kenneth M. Perry, Esq. All rights reserved. Reproduction is permitted so long as no charge is made for copies, no copies are placed on any electronic online service or database for which there is a fee other than a flat access charge, there is no alteration and this copyright notice is included.