Computers, Information Technology, the Internet, Ethics, Society and Human Values

Philip Pecorino, Ph.D.

Queensborough Community College,  CUNY

Chapter 9 Information Technologies and Accountability


Is there any truth to the “Myth of Amoral Computer Programming”?   Is there no one responsible for a complex computer program that many people worked on to develop and test?  If not, then how is responsibility to be determined?  How are computer programmers to act?   Why?  What ethical principles are used to support your position?



Jane Johnson discovers that people have been posting some pretty nasty things about her on a bulletin board, things that are not true at all.  She wants the postings stopped and the falsehoods removed.  Who is responsible for allowing an unknown number of people to read false reports about Jane?   Do the ISP operators do anything immoral in permitting the postings?  In maintaining the postings?

Third Party Defamation Liability for ISP

ISP and Bulletin Board Responsibility


The following are remarks, reflections and responses to issues and questions related to the matters in this chapter.  Each offering is preceded by the author's name and institutional affiliation.


Chris Murphy, CUNY, SPS, 2007

Accountable, liable and moral must be understood properly before we apply them.  Accountability is simply the concept that there is some tie between an action and an outcome, whereby the outcome follows from the action.  Liable means that some action has, by its subsequent outcome, played some part in harm to a person or a thing as defined by the law.  Moral actions follow from ethical principles that define for us what we must do in order to secure the GOOD.

Accountability: Designers, Programmers, Manufacturers and Distributors.

From our definition we can see that everyone is accountable for the outcome of their actions in some way; whether an action's end be far removed or recent in time need not matter.  Accountability is outlined by the end goal of some action and gives us some variables to work with when determining who might be liable for some negative outcome.  The parties involved might be designing a program to run a pacemaker, whose end goal, we might say, is to keep the patient's heart running as long as the rest of the body cooperates.  Let us say, however, that the program fails, resulting in the loss of life.  We must now assess accountability, namely whose actions resulted in this outcome.  If all involved did what was needed to complete the task at hand and instead some unforeseeable circumstance caused the program to malfunction, then we might judge all equally accountable; a decision of liability must be left until later.  Assuming that the loss of life is greater than this anomaly, we must reason that some greater error was made in the process.  From this point there are two possible findings in the process of discerning accountability: neglect and intent to harm.  One may find that a particular party was neglectful in some aspect of the process, or that one party intended for some harm to happen.  At this important juncture we can say that the actions of some of those involved have diverted from the end goal of the project, leading from mere accountability to a question of liability.

Liability: Designers, Programmers, Manufacturers and Distributors.

Liability is strictly the question of accountability as it applies to the law.  That said, questions of liability are based on the laws of a particular area, and for this essay we shall not go into particulars.  As a general note, if one is accountable for some GOOD, or at least for what is not determined to be harmful from a legal standpoint, we do not concern ourselves with this person.  Only when accountability involves some harm do we, first, assess whether there is a law pertaining to it, and second, analyze the terms of liability under that law.   From our first circumstance, where all are equally accountable for some harm caused by an unforeseeable event, we may determine that there is no liability.  A similar example from recent events comes from the 10-year-old boy who accidentally started a fire in California that destroyed hundreds of homes, causing hundreds of millions of dollars in damage.  The DA's office determined that he was accountable but could not be held liable because his age limited his ability to truly comprehend the possible outcome.  One of the major factors the DA had to determine was: did the child have the understanding and intent to cause the damage from the fires?  A similar consideration comes into play in our second software scenario: where intent to harm is involved, we might assign accountability to the entire group but liability only to the parties with intent.  Our third possibility for harm from the software's creators is neglect, which could not be applied to the child who started the fires because of his inability to understand the consequences; the same cannot be argued for the creators of this software.  In essence, all liability follows from accountability for harmful actions as the law defines them.

Morals: Designers, Programmers, Manufacturers and Distributors.

Ethical principles define for us what to do, and in doing so lay out our morals.  Some ethical principles may be more concerned with outcome (Utilitarianism) and some with intent (Kantian), but the goal is the same: to tell us what to do in order to achieve what is good.  Any of the groups discussed are, in cases of neglect or harmful intent, acting immorally.  From a Kantian standpoint, neither is treating the patient as an end but instead as a means to profit or some personal advantage.   From a Utilitarian view, the neglectful action creates less good for society as a whole than the diligent one, and is therefore immoral.  So too does the intent to harm create an imbalance of good as compared to the opposite action, and consequently it can also be judged immoral.

An important difference to note between accountability and morality is that we are only accountable after the fact, whereas morals can be applied before actions are taken.  This leaves ethics and morals distinct from our considerations of accountability and, subsequently, liability.  It also gives us at least some insight into why laws, and not ethical principles, rule our modern society.

Joseph Snellenberg, CUNY, SPS, 2007

As wonderful and helpful as technology is, there still exists a measure of trust between consumer and manufacturer in regard to product reliability and satisfaction. In today's world, this trust concerns how well a product works; should the product fail or not meet consumer satisfaction, the manufacturer is expected to take some form of accountability for the failure. For software companies, the expectations from consumers regarding how well a product performs are much higher, due to the serious complications surrounding faulty or problematic software. For example, if a piece of software is found to contain a line of code that poses a threat to computers, the company is at risk of legal and even ethical trouble. In most cases, the blame is placed on a select group of individuals within the company: the developers of the software.

            Honesty does go a long way, but in terms of software, “It would be ridiculous to expect vendors to tell literally everything they know about the software they are selling. Most customers…could not understand it. Moreover, much of what a vendor knows is not relevant to the customer’s decision.” (Johnson, Deborah G. Computer Ethics. Third Edition. Prentice Hall. Upper Saddle River, NJ. 2001. Pg. 178). Thus, software developers are constantly placed in a position that forces them to ensure that their products satisfy legal and moral critics as well. However, it is the legal side that is most often presented to the public’s eyes, as opposed to the more damaging moral side of holding software developers accountable for technology problems. There is a reason why I think both sides are equally important, though: the personal image of the developers. Legal troubles can significantly hamper a developer’s work to supply consumers with working and satisfactory products, but moral criticism can do more. If a developer is found guilty of violating ethical standards, not only will it doom the software’s chances of success, the developer will be financially and emotionally ruined on top of this. In society, breaking legal standards is a serious matter, but not always a fatal wound to the offender. Breaking moral standards, however, is almost always the final nail in the coffin for someone’s career; it is next to impossible to recover from being found guilty of violating ethical standards, because it creates an image that leads the public to no longer place any trust in that person. The same can be said for those who break legal standards, but I feel that since the media covers legal troubles so much more than moral problems, breaking legal standards seems to have lost the shock factor it once had.

            Regardless, I do feel that software developers should be held accountable to the maximum extent for their actions when software malfunctions, but with limitations. One limitation is connected to the term causal responsibility. The term is used “…when we say that an individual is responsible for an untoward event, we mean that the individual did something (or failed to do something) that caused the untoward event.” (Johnson, Deborah G. Computer Ethics. Third Edition. Prentice Hall. Upper Saddle River, NJ. 2001. Pg. 174). An example of causal responsibility involves the cocky computer user who thinks he or she knows everything and can install and use software without reading instructions or warnings from developers and manufacturers, only to encounter numerous problems with the software. In their arrogance, these individuals attempt to fix the problem themselves and create new problems in turn, or make the existing problem worse. As a result, they are forced either to take their computer to a professional repair shop or, in a further fit of arrogance, to throw out the problematic computer and buy a new one. In this case, these users should be held accountable for their misfortune because they attempted to fix the problem on their own without professional help, even though such help was available. As for developers, this case is something that they cannot fully test or anticipate in the lab; thus, they are responsible only for releasing the product to the public. The developers are not responsible for the problems caused by overconfident users who ignored the instructions and recommendations released with the software, because the developers have no control over user behavior.
Now, if the company admits there were problems with certain software, then these overly arrogant users are still accountable for their actions, though less so than if the software in question had a record of working properly with little or no complaint. This is because they ignored the warnings from the developers and claimed they could make it work, even though the developers publicly stated that the software had problems. The developers are not completely off the hook in this case, though; they are still responsible for releasing a product that had faults and should respond accordingly.

            Another limitation is the case where an anti-virus or anti-spyware program views the software as harmful to a computer and causes some difficulties when running the computer. In this case, either the developer who made the software has not told the developers of the anti-virus/anti-spyware programs about the new software, or the anti-virus/anti-spyware developers have not yet updated their product’s information database. Here, I feel that the software developer does not need to be severely punished for failing to notify a certain anti-virus developer about the release of new software, but rather should be asked to smooth things out with the anti-virus developers in the future by improving communications between them through various means, such as giving them a copy shortly before the public release. I believe this action in particular could also benefit the consumer, because if the anti-virus developers find a serious error or bug in the software, then the software developer can decide whether it is a good idea to release the product on the scheduled date or push the release back long enough to fix the error. Legally, no law has been broken here, because there was simply a miscommunication outside of the casual computer user’s control. Morally, no codes were broken either: the developer released some software, and some anti-virus software made an incorrect assumption about it. This form of miscommunication is something that companies tend to acknowledge often, so existing moral standards are left intact.

            Finally, there is the case where a new piece of software has a conflict with a device (e.g., a printer) that in turn causes problems for the entire computer. The software developer knows that his software may or may not be compatible with other devices at the time of the software’s release; thus he decides to put faith in the manufacturers of those devices and hopes that they will release compatible models before or by the time the new software is publicly released. The software developer is responsible only for his own software; the manufacturers of the device are responsible for meeting the requirements that the new software demands. Legally, the software developer has not committed a crime because, as with the anti-virus case, the device manufacturers operate outside of the developer’s control. Furthermore, the software developer cannot influence or help the manufacturer beyond supplying a copy of his software. After that, all responsibility is on the manufacturers’ shoulders; if they do not release a compatible version of their device, the fault lies with the manufacturers for not working fast enough. Thus, the software developer is not to blame for any problems that occur and need only take responsibility for his own software.

            Despite these limitations involving external factors, when something serious does happen with software, developers should not, in my opinion, simply brush incidents under the carpet and pretend that nothing happened. For example, if I buy some software and it has certain compatibility issues with other software, then the developers who made it should be responsible not only for helping me fix the problem, but should also learn from my case and work to ensure that something like this does not happen again. Likewise, if a certain piece of software does more than it should, and that extra function is harmful, the developers should be held accountable, because they put out a product that could potentially be a serious threat not just to the casual user, but to important organizations and businesses.

            An example of this would be a piece of software that helps improve computer performance, but also monitors user behavior (e.g., what programs a user has installed, what websites he or she views online, etc.) without letting the user know that he or she is actually being monitored. Here, the developer not only knew that there was a hidden function within the software that monitored user behavior, but took the gathered information and used it to make it appear that he was appealing to certain individuals. On top of this, the developer denied that any such function existed in the software and accused those who complained of lying. The developer, after a messy court battle, is found guilty of illegally spying on individuals and covering up knowledge of the issue. In this case, both legal and ethical issues are raised: not only did the developer attempt to view private information without consent (the legal side), he took this illegally obtained knowledge and used it to his advantage to keep the user in question loyal to the company (the ethical side), on top of denying that any wrongdoing was committed. As a result, the developer should not only take responsibility for creating the software, but should also be punished in the courts for his actions. The software developer had advance knowledge of the monitoring function, yet failed to notify consumers because he felt it was unnecessary to let people know that he was spying on them and conducting business based on how his target audience behaved. Thus, the developer should be held accountable not only for breaking the law, but for doing business in an immoral manner.

            A big part of developer accountability lies in pre-existing knowledge of a problem and how the developer handles its disclosure. In the above example, the developer knew that he had written a piece of software that sent information back to his own computer, yet did nothing and tried to cover up the fact. A similar example involves a developer who knows of a serious glitch or bug in the software, yet instead of taking time to fix the problem, hopes it will not be serious for consumers and goes ahead and distributes the software. Shortly after the software goes public, people slowly begin to discover the bug and complain to the developer. The developer takes an interesting path here: he admits that there is a bug in the software, but claims the bug did not present a serious threat to computers when the software was being tested. In this case, the developer exhibits negligence and overconfidence in his product’s ability to overcome the bug. As a result, the developer has not broken any legal standards; however, on an ethical level, he has broken a moral code by deceiving consumers, acting as if the problem would fix itself and ignoring its severity. In other words, by neglecting to take the bug into account, the developer deserves the blame for underestimating a serious flaw in his own software.

            Overall, I feel that when it comes to developer accountability, the developer should have a certain degree of responsibility in regard to his software and what it can do. The developer alone knows every little detail about the software, so when something goes wrong, the developer should not have a blank stare on his face. If a problem arises, the developer should know where to go to fix it immediately and implement a solution just as fast, because simply dodging the problem will only create more problems down the road. By denying that a problem exists, the developer not only hurts his reputation, but can exhaust his resources to the point where he is out of business. In my opinion, if a problem with software is presented, it will not be that big a loss if the developer moves a small amount of resources to address it, because that movement of resources could potentially result in an increase of customers and company profit. The fact that software developers more often want to deny mistakes or problems with their products than confront them makes me feel that ego and pride matter more to developers than satisfying their customers.


Web Surfer's Caveat: These are class notes, intended to comment on readings and amplify class discussion. They should be read as such. They are not intended for publication or general distribution. © 2006 Philip A. Pecorino

Last updated 8-2006