We need to work harder to make software engineering more ethical


Last week’s news that a study from North Carolina State University found that a code of ethics didn’t seem to make a difference in decisions made by IT professionals was a depressing development for those of us looking for ways to make meaningful progress in the development and regulation of ethical technology.

The code of ethics used in the study belonged to the Association for Computing Machinery (ACM), the world’s largest computing society. Researchers asked 168 people (professional developers and graduate engineering students) ethical questions about 11 different scenarios. Half saw the ACM code of ethics before the exercise, while the others were simply told that it was important to think about ethics. It’s not the most sophisticated study, but the researchers found that in this case there were no significant differences in the responses of the two groups.

Software developers are an integral part of protecting our critical infrastructure. They write the algorithms that make increasingly important decisions about people’s lives. They help protect our information from hackers. These are the people who will determine how computers, drones, banks, criminal convictions, predictive policing and surveillance work. They determine how machine learning will be implemented. They change the world.

It’s time we stopped making sterile lists of do’s and don’ts and instead put our energy into something with some bite. Let’s stop writing general guidelines and start being fiercely specific, where we can, about the formal rules and the consequences of bad behavior. Do no harm, take no bribes, be fair to others: these are truisms that apply to any profession.

Academic and professional societies are in a unique position to formulate these codes, but they evolve very slowly (often because of “too many cooks in the kitchen”) and have no real power to enforce them. Unlike the American Bar Association, for example, they cannot bar an engineer from practicing for deeply unethical behavior, and I doubt anyone writing malicious code would be deterred by expulsion from an academic society.

My fellow Forbes contributor Patrick Lin has been involved in these discussions for a long time. He is director of the Ethics + Emerging Sciences group at Cal Poly and an affiliate researcher at Stanford’s Center for Internet and Society. Asked about the current code conundrum, he replied that two things are essential to making progress:

First, people need to understand and appreciate the issues, whether it’s why ethics matters in engineering or why weapon X is bad and should be banned. This is above all about education. Second, a code of ethics, or a treaty, must have teeth or provide a clear sanction for non-compliance. This is about enforcement. It doesn’t have to be direct physical or financial punishment; even peer pressure can be quite effective. It could mean anything from an industry culture that won’t hire or promote an engineer who doesn’t care about ethics, to swift condemnation when a nation breaks with customary international norms.

Certainly, there are many obstacles to an effective code of ethics that truly governs the behavior of software engineers: there is no single group responsible for their behavior, the rules must be applied on a global scale to be maximally effective, and ethical conduct is rarely straightforward.

Too often, we’ve let this stop us from doing anything and become paralyzed in the face of technology that outpaces policy day by day. Even a piecemeal approach that addresses some of the most baffling issues we face could be a useful case study.

Why not start by looking at algorithmic bias, for example, since this problem affects more people every day? Summarize the research on how algorithms discriminate; shed light on areas where they are already at play in our daily lives; name the companies that create and sell these algorithms (especially those that will not share their proprietary code); reveal which businesses, police departments, banks, etc. have implemented these algorithms. In short, show people that it matters and requires intervention.

Ethical guidelines for software developers writing this type of code should go beyond the golden rule. A guideline should be less like “Be aware of possible biases” and more like “All software developers should educate themselves about programming biases and their potential harms, examine their designs and subsequent code for such bias, be able to explain how they have addressed these biases, be willing to subject their code to scrutiny, take responsibility for flaws found in the code that could lead to bias, and address concerns raised by oversight committees.”
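To show how concrete “examine your code for such bias” could become, here is a minimal sketch in Python of one common check: measuring whether a system’s approval decisions differ markedly across demographic groups (a “demographic parity” gap). The data, group labels, and threshold below are hypothetical illustrations, not drawn from the study or from any real system.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Hypothetical loan decisions: (applicant group, approved?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print("Approval rates:", approval_rates(sample))
    gap = demographic_parity_gap(sample)
    print(f"Demographic parity gap: {gap:.2f}")
    # A team might flag any gap above an agreed threshold (say 0.10) for review.
    if gap > 0.10:
        print("Gap exceeds threshold; document the cause and the mitigation.")
```

Even a check this simple forces the questions a specific guideline demands: what gap is acceptable, who decides, and what happens when the threshold is exceeded.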

We should require developers to seek out education and to be genuinely concerned that their work can cause harm if they are not vigilant. The best way to achieve this is to integrate the teaching of ethics into engineering education and to institute continuing training in the workplace. But, as Lin points out, we’re not doing enough:

Teaching ethics seems to be gaining momentum, but it can’t be a one-off course requirement; it has to be part of the culture, part of their DNA. And teaching why ethics or benevolence should matter to anyone not already inclined to think that way is a very difficult task. It’s not an impossible job, but I don’t see many engineering programs devoting enough resources to this super basic question, which is really what engineering is all about: improving our lives.

Add to that the problem that many companies that are already operating profitably will need stronger incentives to add a continuing-education component to their infrastructure.

It is perhaps the ‘soft law’ approach that has the most potential: quasi-legal regulations that govern behavior and can be adopted by governments, businesses, professional societies, and so on. A good example is a proposal from lawyer Gary Marchant and ethicist Wendell Wallach called the Governance Coordination Committee (GCC), described here and in Wallach’s book A Dangerous Master (Basic Books, 2015). It is essentially a group of widely respected experts who carry enough gravitas to be taken seriously by entities that can then impose their recommendations on citizens, employees, students, and so on. There is, of course, the potential for such groups to become homogeneous and paternalistic, but if the members are diverse enough, there’s a much better chance they’ll be taken seriously.

Any group that coalesces and sees itself as an ethical authority will have to be very nimble, lest it end up with a vague, neutral code of conduct that reiterates much of what we’ve seen before, and which the researchers in the North Carolina State study showed to be largely ineffective.

More importantly, ethical regulators will need to communicate frequently and effectively with the public and provide a feedback mechanism. Writing in popular outlets and providing open access to resources on the web will be of the utmost importance. The process must be visible and transparent at every step. That doesn’t mean that every nutcase reaction deserves a response, but people need to be made aware of the group’s existence, its participants, the issues they are working on, and why those issues matter.

Bland and generic codes of ethics have clearly let us down. The Volkswagen scandal, in which software developers created a way to cheat emissions tests, ended up costing the company around $25 billion, and at least two of the engineers were imprisoned for their roles. Both Facebook and Google are facing massive backlash over code that manipulates and discriminates against people. All of these developers were technically bound by a code of ethics that clearly did not influence their behavior.

Gordon K. Morehouse