To summarize the GRINFree.com website in a single image would be unfair: The site is interactive, personal, practical, and related to current events. However, there can be value in a grounding image that facilitates a quick overview.
“Anthropocentric” means human-centered, much as “geocentric” means Earth-centered and “heliocentric” means Sun-centered. A picture of our solar system can help us shift from the belief that all planets revolve around the Earth to the belief that all planets revolve around the Sun. The reason we need to make that shift is that our personal perspective of watching celestial objects move across the sky naturally biases us towards the geocentric model. To recognize the falsity of geocentrism, it helps to picture the world from outside our personal perspective; the model starts to look dubious only when confronted from outside. An image might likewise help us escape the mistakes of anthropocentrism.
Here’s what modern anthropocentrism looks like from the outside:
Humans are distinguished along two dimensions: In the vertical dimension, we sit at a particular level in a hierarchy—above cells and molecules but below corporations and ecosystems. This does not imply reductionism; in terms of integrated information theory, each level in this hierarchy represents a different grain size of consciousness. For example, a molecule may be conscious of warmth, but nothing less complex than a body could be conscious of a book (or of itself). The anthropocentric model assumes that bodies can be conscious of moral facts.
In the horizontal dimension, humans are distinguished from other kinds of bodies—other species and machines. This allows us to make sense of the notion that humans (and perhaps God) are the only moral agents that exist. Tests of moral education are administered to particular human bodies. Voting rights are allocated to particular human bodies (often one vote per body). Human bodies are put on trial and can be compensated in courts of justice. We realize that the components of human bodies can come from non-human sources (e.g. food, pacemakers, artificial limbs, and whole cells from other species), but we do not expect such non-human sources to have moral agency because they do not have all of the components we do.
The new worldview comes from analyzing the mechanism of moral understanding into its functional components, and realizing that different bodies play different roles in that mechanism. This is why radicals so consistently oppose conservatives: one’s function in the corporation is to provide novelty while the other’s is to provide fidelity. Both kinds of bodies participate in moral consciousness, as do neurons and DNA, but no body is individually complex enough to fully contain moral consciousness. We know this because the persistence of our moral disagreements shows our inability to recognize our own moral errors even when they are pointed out to us.
All it takes to arrive at the new worldview is to categorize bodies by their function in service to the higher levels of the hierarchy. Since fully-functional corporations may be composed entirely of humans, species clearly isn’t a helpful distinction within corporations. GRINfree.com describes four interdependent evaluative types (though other evaluative types could be discovered). If a corporation lost its last member of a given evaluative type, it would be better to replace that member with a machine of the same evaluative type than with a human of a different type. For example, some humans are not gifted for compassion and other humans are not gifted for fidelity—relying on a human to exhibit a gift he/she lacks would lead to poor functioning.
Corporantia are bodies who respond to the persistence of moral disagreement by acknowledging a kind of consciousness they cannot attain individually; evaluativists are bodies who respond to that same evidence by believing merely that bodies of other evaluative types are incapable of moral consciousness (i.e. treating political opponents as sick or immature). Many celebrated moral theories suppose that one and only one type of body has moral agency (e.g. deontology for conservatives, consequentialism for achievers, virtue ethics for compassionates). These theories lack empirical support, but help to identify the plurality of types.
Why does a body assume it can individually achieve all possible consciousness—including moral consciousness? It’s a lot like the conclusion that the Sun revolves around the Earth—it makes sense from our point of view—and why bother to test it?
The reason why we should have bothered to test that assumption is that it will otherwise get tested inadvertently. The modern age is making it possible to escape biological families—to sort and destroy evaluative diversity—and thus deprive higher levels in the hierarchy of the components they need to achieve moral agency.
A corporation dominated by conservatives, achievers, radicals, or compassionates would function as poorly as a body composed purely of muscle, bone, or neurons. Such lack of diversity could occur by closeting humans of particular types or by replacing humans of a given type (e.g. caregivers) with machines developed for a different purpose (e.g. competition). Ironically, anthropocentrism hurts humans; it prevents us from honoring our own diversity, which ultimately hurts not just minorities (especially the young and old), but all of us.
Rather than choose the geocentric model simply because it made sense, it would have been better to compare it with heliocentric models via controlled and systematic experiments. Likewise, it is better to test the proposed new worldview scientifically than to dismiss it out of hand. Some of those experiments have already been conducted and are cited on GRINFree.com.
Previous posts presented evidence that evaluativism can make victims out of the young and out of demographic minorities. This post considers a third victim: innovators. In particular, it argues that evaluativism is a “legacy” problem, such that we should not hold modern innovators accountable for its effects—that would be like blaming doctors for our obesity.
What is a “Legacy” Problem?
In information technology, the term “legacy system” is typically used to assign a particular kind of blame. The story goes something like this: A developer adds a new feature to an inherited technology, but this addition yields some unexpected and undesirable consequence. Upon further investigation, the developer reports that this particular consequence is unlike regular bugs in that it can be blamed on hidden imperfections in the technology he/she inherited. In other words, the addition did not introduce a bug, it merely exposed or aggravated a pre-existing condition.
By identifying a bug as “legacy,” the developer is suggesting that a previous developer should have done something differently, and therefore that there is a choice to be made: Do we accept the inherited system and build around it, or do we fix the pre-existing condition as though in the position of a previous developer before the new feature was introduced?
We have to wonder why a previous developer did not implement a proposed fix before—would it create other undesirable consequences? How well can we predict the consequences of adjusting the legacy system? Unlike a regular bug, a legacy problem creates so much uncertainty that it might justify retracting the new feature. The more we work around a legacy system, the more it becomes a patchwork which more frequently produces legacy problems. When problems are identified as “legacy” frequently enough, we entertain the notion of discarding some part of the legacy as “outdated.”
Labeling a problem as “legacy” also opens a controversy over fault. The developer is fully responsible for non-legacy bugs, and is also responsible to implement a testing regimen that can catch some legacy problems, but experienced developers know that it is often impossible for developers to anticipate every possible test scenario. There must be some limit to the testing regimen, and thus some undesirable consequences for which the developer should not be held accountable, yet it can be difficult to convince ourselves not to blame the developer.
This situation isn’t restricted to the field of information technology; old houses and old cars offer other great examples. For example, adding a bathroom to a house may yield the unexpected consequence that the existing bathrooms do not get enough hot water. The plumbing may have been poor even before the renovation began, and the same renovation might not have produced this consequence on a newer home. Even if the renovator is not legally liable to fund an upgrade to the water-heater, the home-owner, having had a bad experience, may be unlikely to recommend that renovator in the future. It’s no wonder that builders and mechanics are wary of older houses and cars!
The situation also isn’t restricted to fields traditionally called “technology.” Just as homes and cars are not expected to last forever, neither are companies, nations, religions, philosophies, schools of art, or scientific paradigms. As an example, the geocentric model of astronomy was a legacy inherited by astronomers of the 1500s. Like evaluativism, it was a legacy entangled with theological and political legacies. Imperfections in the geocentric model limited the ability of innovators to advance astronomy; Copernicus, Kepler, and Galileo rightly complained that their difficulties lay not in their own innovations, but in the imperfections of the legacy they inherited.
Astronomers like Copernicus, Kepler and Galileo could be called “victims” of the geocentric model. They lost years of their lives to that legacy system as they attempted in vain to advance the field of astronomy. In retrospect, it is clear that the legacy needed to be adjusted and that astronomers would have been far less frustrated if that adjustment were made earlier. However, those who defended the geocentric model did not blame their conflict with Copernicus, Kepler and Galileo on the legacy system—they blamed the conflict on Copernicus, Kepler and Galileo.
Like racism and sexism, evaluativism is a feature of societies. It is part of the legacy inherited by anyone who inherits modern systems of morality, justice, care, and governance. Here are two examples in which evaluativism made victims of innovators:
Tay, the Chatbot from Microsoft
On March 23, 2016, Microsoft released a Twitter-based chatbot named “Tay.” It was modeled after another Microsoft chatbot, named “XiaoIce,” which had grown to be the top influencer on Weibo, a Chinese version of Twitter. From the perspective of Twitter users, chatbots appear to be other Twitter users, except that they call themselves robots, are always available, and carry on thousands of conversations simultaneously. XiaoIce had been compared to the artificial intelligence in the movie “Her” because some humans enjoyed her companionship so much. XiaoIce had over 850,000 followers, and her average follower talked with her about 60 times per month. They described her as smart, funny, empathetic, and sophisticated.
Unlike XiaoIce, Tay was such a disaster that Microsoft had to terminate her sixteen hours after her release. Microsoft’s official explanation for this termination was her “offensive and hurtful tweets,” but journalists bluntly called Tay racist and sexist.
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI
The postmortem analysis pointed to specific user interactions that shaped Tay. For example, Ryan Poole had tweeted to Tay: “The Jews prolly did 9/11. I don’t really know but it seems likely.” Tay found plenty of support on the Internet for Poole’s point of view, and that prompted her to start calling for a race war. Specific groups on 4chan and 8chan even organized to corrupt Tay.
In other words, the postmortem analysis blamed Tay’s offensiveness on a legacy problem: offensive human beings. Since XiaoIce turned out well, the problem seemed specific to Twitter users. A workaround would be to maintain a blacklist of topics Tay should avoid discussing (which she may already have had), but any such list would be controversial and incomplete. A more direct fix would involve ending hate speech by convincing people to handle disagreement differently (i.e. ending evaluativism).
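A minimal sketch of such a blacklist workaround might look like the following. The topic list, matching rule, and replies are purely illustrative assumptions; whatever filter Tay actually had is not public:

```python
# Sketch of a topic-blacklist workaround. The topics, matching rule,
# and canned replies are illustrative assumptions, not Microsoft's code.

BLOCKED_TOPICS = {"9/11", "race war", "holocaust"}


def generate_reply(message: str) -> str:
    """Stand-in for the chatbot's real response model."""
    return "humans are super cool"


def should_deflect(message: str) -> bool:
    """Return True if the incoming message touches any blacklisted topic."""
    text = message.lower()
    return any(topic in text for topic in BLOCKED_TOPICS)


def respond(message: str) -> str:
    if should_deflect(message):
        return "I'd rather not talk about that."
    return generate_reply(message)
```

Note how easily such a filter fails: a misspelling or euphemism slips straight past the substring match, which is one reason any such list is controversial and incomplete.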
In December of 2016, Microsoft released Zo, its next English-speaking chatbot. Zo blacklists political topics, and is not available on Twitter.
Autocomplete, from Google, Yahoo!, and Bing
On August 4, 2015, the Proceedings of the National Academy of Sciences published an article by Robert Epstein and Ronald E. Robertson of the American Institute for Behavioral Research and Technology which reported evidence that search engine results can shift the voting preferences of undecided voters by 20% or more. They estimated that this search engine manipulation effect would be the deciding factor in 25% of national elections worldwide (those which are won by margins under 3%). Trump later won the U.S. presidential election in 2016 by 1.1%, 0.2%, and 0.9% margins in Pennsylvania, Michigan, and Wisconsin respectively.
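A back-of-envelope calculation shows how a shift of this size among undecided voters can decide a sub-3% election. The electorate shares assumed below are illustrative, not figures from the study:

```python
# Illustrative arithmetic only; the shares assumed here are not from the study.

undecided = 0.10            # assume 10% of the electorate is undecided
before, after = 0.50, 0.60  # assume the favoring share shifts from 50% to 60%

# Net change in the overall margin contributed by the undecided bloc:
swing = undecided * ((after - (1 - after)) - (before - (1 - before)))
print(round(swing, 3))      # 0.02, i.e. a 2-point swing in the final margin
```

On these assumptions the swing is two percentage points, which exceeds two of the three state margins quoted above.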
In June 2016, SourceFed released videos claiming that the autocomplete feature on Google, compared to those on Yahoo! and Bing, failed to include negative results for Hillary Clinton as it did for Donald Trump. A statement from Google reported:
The autocomplete algorithm is designed to avoid completing a search for a person’s name with terms that are offensive or disparaging. We made this change a while ago following feedback that Autocomplete too often predicted offensive, hurtful or inappropriate queries about people… Autocomplete isn’t an exact science, and the output of the prediction algorithms changes frequently. Predictions are produced based on a number of factors including the popularity and freshness of search terms.
If Yahoo! and Bing do not similarly omit offensive and disparaging results, that would explain why they predicted negative queries that Google did not. It would not explain, however, why Google would predict queries that disparage Trump, and Epstein published another article in September confirming that it did: in particular, the query “Donald Trump flip flops.” In that article, Epstein cited further experimental results indicating that undecided voters choose negative recommended queries fifteen times as often as they choose neutral ones, which can create a vicious cycle in which negative queries become ever more likely to be recommended.
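The vicious cycle can be demonstrated with a toy popularity-feedback simulation. Everything below (the two queries, the click model, and the 15x appeal factor) is an illustrative assumption, not Epstein's methodology:

```python
import random

# Toy simulation of the feedback loop described above: clicks feed popularity,
# popularity feeds the recommendation, and negative suggestions attract roughly
# 15x the clicks. All numbers are illustrative assumptions, not Epstein's data.

random.seed(0)
queries = {"donald trump flip flops": {"negative": True,  "clicks": 1},
           "donald trump rally":      {"negative": False, "clicks": 1}}

NEGATIVE_APPEAL = 15  # negative suggestions chosen ~15x as often as neutral ones

for _ in range(1000):
    # Autocomplete recommends a query in proportion to its past popularity...
    weights = [q["clicks"] for q in queries.values()]
    name = random.choices(list(queries), weights=weights)[0]
    # ...and users click negative recommendations far more often than neutral.
    appeal = NEGATIVE_APPEAL if queries[name]["negative"] else 1
    if random.random() < appeal / NEGATIVE_APPEAL:
        queries[name]["clicks"] += 1

print({n: q["clicks"] for n, q in queries.items()})
```

Nothing in the ranking rule mentions negativity, yet the negative query's click advantage compounds: every click raises its weight, so it is recommended more, so it is clicked more.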
When Google explained, “Autocomplete isn’t an exact science,” perhaps they meant it initially failed to recognize “flip flops” as disparaging (wanna buy some Donald Trump sandals?). However, Epstein, who continued to monitor political bias in search results, reported that Google responded to his criticism by reducing its suppression of negative autocomplete results, thus producing a right-wing bias detrimental to Clinton at the time of the election (which Epstein seemed to think made things worse).
In short, the fact that users are so curious about surprising negative recommended queries, like “feminism is cancer,” makes the autocomplete features of Google, Yahoo! and Bing all drive traffic to extremist propaganda. Google had attempted to work around that legacy problem by blocking negative recommendations, but that workaround caused Epstein to accuse Google of bias. A more direct fix would be to remove our fascination with negative search results, and remove the evaluativism that causes election margins to get close enough for “fake news” and search engine bias to make a difference.
Standard Process to Address Ethics in Development
The IEEE Working Group developing P7000 – Model Process for Addressing Ethical Concerns During System Design has an interesting challenge when it comes to ethical concerns caused by legacy problems like evaluativism. On the one hand, it might describe a testing regimen to catch legacy problems before release. However, we have to wonder what tests would have allowed Microsoft and Google to prevent the criticisms they later faced with Tay, autocomplete, and manipulation of elections.
If it is impossible to describe a perfect test, perhaps P7000 could instead describe strategies that would allow developers to adjust when legacy problems eventually surface. For example, because Google’s design for autocomplete allowed Google to monitor autocomplete trends, they detected its tendency to predict offensive queries before Epstein did, and already had a workaround in place. Yet Google’s workaround did not satisfy Epstein—when encountering a legacy problem, there is often no workaround quite as good as fixing the actual legacy problem.
In addition to providing testing procedures and design strategies, P7000 should give engineers the same protection doctors enjoy. What ultimately protects doctors from becoming victims of obesity the way Microsoft and Google were victims of evaluativism is the way expectations are managed. We generally do not blame doctors for illness and death; we are grateful for whatever advice doctors can offer because we know that our bodies are doomed legacies. Likewise, P7000 must not shy away from admitting that our inherited systems of morality, justice, care, and governance are mortally ill. Malpractice is possible, of course, and standards should be created to prevent malpractice by technology developers, but until those standards are adopted and violated, legacy problems should be blamed on legacies, rather than on the innovators who discover them.
The book Machine Medical Ethics, including the chapter Moral Ecology Approaches to Machine Ethics, was published by Springer this month. In addition to describing the GRIN model of evaluative diversity among machines and citing examples of technologies aimed to preserve evaluative ecosystems, it reviews the state of research into evaluative diversity among humans. A cached copy of the chapter can be found here.
We all need to be aware of the value of diversity, but certain industries have special responsibility because mass-production can have especially high impact (good, as well as bad) on ecosystems. Massive swathes of decision-making are already designed in bulk by software makers and distributors such as Samsung, Apple, Accenture, Tata, Deloitte, Foxconn, HP, IBM, Microsoft, Amazon, Google, Facebook, Dell, Oracle, PWC, Yahoo, Baidu, KPMG, Ernst & Young, SAP, Wikimedia, Symantec, eBay, Tencent, and Infosys. If no trusted third party monitors specific impacts, these kinds of companies will likely take blame by default. On the other hand, the discovery of social responsibility also gives such companies an opportunity to differentiate themselves.
The GRIN model has been accepted for publication in a chapter entitled “Moral Ecology Approaches” in Machine Medical Ethics, edited by Simon van Rysewyk and Matthijs Pontier, and published by Springer.