Team: Min Kyung Lee, Shashank Ojha, Chris Sheng, and Anuraag Jain
My role: UX/UI concept development, visual design, user research (70+ participants)
Timeframe: 12 weeks, Summer 2017
01 Project Background: Perceptions of Algorithmic Decisions
Spliddit is a public website that uses provably fair algorithms to solve everyday division problems, such as splitting goods among a group of people. Yet despite being mathematically fair, its results do not always feel fair from a human perspective. This points to gaps between algorithmic assumptions and human behavior, which contribute to perceived unfairness. For this project, my team and I redesigned the user experience with the goal of giving users transparency into, and control over, algorithmic results.
Research on Current State
First, we tried to understand why it was so difficult for people to trust and act on the suggestions that Spliddit provided. We reviewed the literature on algorithmic mediation in group decisions, gathered information about users' experiences with the website, and learned from the creators of Spliddit.
From our research, we identified pain points that users faced when using Spliddit.
We then walked through the user journey on Spliddit, examining each interaction from creating a new division to receiving the allocation results, and identified several pain points along the way.
We then dug deeper into the backend of the algorithm to see what it was storing, hiding, and showing users. This helped us identify where users started to distrust the algorithm.
Why does Spliddit seem unfair?
Through mapping out users' experiences, we found that the structure of Spliddit ignores important social values such as fairness, altruism, nuance, and context, all of which are difficult to computationally model but vital to social justice and thriving communities. As a result, many people find it difficult to trust and act on its suggestions.
1) Hiding of Group Members' Preferences
Spliddit's system of division hid other group members' preferences (the values they entered in the slider-bar phase). This led users to distrust the algorithm, and even to distrust other group members.
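One way to see why hiding preferences breeds distrust: when everyone's inputs are visible, a group can verify for itself that no one has reason to envy another member's share. Below is a minimal sketch of such an envy-freeness check (the function and variable names are hypothetical, not Spliddit's code):

```python
def is_envy_free(valuations, allocation):
    """Check whether an allocation of goods is envy-free.

    valuations[i][g]: how much person i values good g (e.g. the points
    each person assigned during the input phase).
    allocation[i]: the set of goods assigned to person i.
    """
    n = len(valuations)
    for i in range(n):
        my_value = sum(valuations[i][g] for g in allocation[i])
        for j in range(n):
            # Person i envies person j if, by i's OWN valuations,
            # j's bundle is worth more than i's bundle.
            their_value = sum(valuations[i][g] for g in allocation[j])
            if their_value > my_value:
                return False
    return True

# Two people, three goods, 100 points distributed by each person.
valuations = [
    [60, 30, 10],   # person 0
    [20, 30, 50],   # person 1
]
allocation = [{0}, {1, 2}]  # person 0 gets good 0; person 1 gets goods 1 and 2
print(is_envy_free(valuations, allocation))  # True
```

With the preference matrix hidden, users cannot run this kind of sanity check themselves, which is exactly the gap we observed.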
2) Hiding of how results were made
While general explanations of algorithmic fairness were helpful, people still could not make sense of the reasoning behind their particular algorithmic decision. Transparency in a decision-making process can increase the perception of procedural fairness.
3) Inability to alter algorithmic results
People are social beings, and different groups can have different concepts of fairness. For some, altruism is an important aspect of group satisfaction, even at the expense of compromising some of their own gains. The ability to change algorithmic results to fit the goals of the group was absent.

Web UI 
Developing a digital tool to understand, check, and control algorithmic decisions with the group.
We identified core issues that groups faced when making decisions with Spliddit and developed the concept of the Spliddit Dashboard, a tool that emphasizes three design aspects:
1. Transparency - the ability to visualize the inputs & outputs of the algorithm
2. Control - the ability to check and correct algorithmic decisions with the group
3. Guided Learning - step-by-step visualization of algorithmic process
To begin with, we wanted to give users transparency by letting them see each other's preferences and results. We did this by showing the numbers each person entered during the input phase.
To break it down, the results panel is formatted as a spreadsheet. The numbers inside are the values users entered in the input phase.
We also saw the importance of showing users how the algorithm worked, not through the mathematical jargon of fairness properties but through a step-by-step process that describes how the algorithm processes inputs using a scale metaphor (shown below).
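To ground the step-by-step explanation, it helps to know what the algorithm is actually optimizing: Spliddit's goods division is based on maximizing Nash social welfare, the product of everyone's utilities. A brute-force sketch for tiny instances (illustrative only, not Spliddit's implementation, which uses far smarter optimization):

```python
from itertools import product

def max_nash_welfare(valuations):
    """Brute-force the goods assignment that maximizes Nash social
    welfare (the product of all participants' utilities).

    valuations[i][g]: person i's value for good g.
    Returns a tuple: element g is the person who receives good g.
    """
    n = len(valuations)          # number of people
    m = len(valuations[0])       # number of goods
    best_alloc, best_welfare = None, -1.0
    # Enumerate every way to assign each of the m goods to one of n people.
    for assignment in product(range(n), repeat=m):
        utils = [0] * n
        for g, person in enumerate(assignment):
            utils[person] += valuations[person][g]
        welfare = 1.0
        for u in utils:
            welfare *= u
        if welfare > best_welfare:
            best_welfare, best_alloc = welfare, assignment
    return best_alloc

valuations = [
    [60, 30, 10],   # person 0's points for the three goods
    [20, 30, 50],   # person 1's points
]
print(max_nash_welfare(valuations))  # (0, 1, 1)
```

Multiplying utilities (rather than summing them) punishes allocations that leave anyone with very little, which is one intuition our scale metaphor tried to convey visually.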
Exploring who controls changes to the results.
On the Spliddit Dashboard, we envisioned different ways users could interact with the group.  Different groups have different ways of interacting with each other. 
1. The group adjusts outcomes at the same time
2. One user at a time adjusts the results
3. Each user changes their own simulation
For our user study, we structured the system so that users first explored on their own, then chose an allocation they thought was fair, and finally discussed as a group to adjust the results for the final allocation.
New Framework of Interactions
The new framework added the ability to see other group members' results and to explore on your own before finalizing the allocation in a group discussion.
Animations of Interactive Changes and Fairness Properties.

User Testing : 71 Participants (23 Sessions)
To evaluate the role of algorithmic transparency and control in the perceived fairness of decision making, we conducted a within-subjects laboratory experiment in which groups of two to seven people divided snacks using Spliddit and our interface. We ran 71 participants in 23 sessions.

Task: Dividing snacks
We used snacks as the goods for the division algorithm. To ensure that some snacks would be more desirable than others, we chose snacks with a wide range of prices.
A study session took between 40 and 70 minutes, depending on the size of the group, and each participant was compensated $10.

In order to measure the impact of transparency and control on perceived fairness, we conducted three surveys with identical measures throughout the experiment.
1. Baseline phase - participants added their inputs to Spliddit individually and were given results
2. Transparency phase - we displayed the interface (with each participant's preferences) for all participants to see
3. Interactive control phase - participants were given the option to alter the algorithm's results individually and as a group
4. Interview & demographic survey - we interviewed each individual in a separate room
The results from our within-subjects laboratory experiment suggest that algorithmic transparency had a paradoxical effect: it increased perceived fairness among those who initially found their outcomes less fair, but decreased perceived fairness among those who initially found their outcomes fair.
Having control over algorithmic results universally increased perceived fairness of the outcomes regardless of whether the groups actually changed the allocations or not.
Transparency shifts accountability from algorithms to users
Because the input and output visualization made people understand the exact role of their input, they were able to acknowledge their own responsibility for the outcomes, and this made them perceive the outcomes as fairer.

Transparency allows for disagreements
As we intended, transparency also provided a basis for people to determine and articulate whether their fairness concepts aligned with the algorithm’s.

Bringing together social and mathematical fairness
As intended, people explored other allocations and discussed them; some groups (22%) changed the algorithmic allocation to make it fairer for their group than what the algorithm originally provided.

Empathy toward algorithmic outcomes
An algorithm automates decisions efficiently, but does not provide any new experience for users. Control gave people a chance to do the task themselves, which helped people realize the inherent limitations in some situations.
Applying to the wider context 
AI + Human Decision Making 

Algorithms enable efficient, data-driven, scalable management of social functions, and this vision is one of the driving forces behind the increasing adoption of algorithms. On the other hand, emerging studies suggest that algorithms cannot yet accommodate diverse social values such as fairness, altruism, nuance, and context, all of which are difficult to computationally model but vital to social justice and thriving communities. While human decisions do accommodate these factors, human decision-making is time- and resource-intensive, difficult or impossible to scale in certain contexts, less beholden to data, and more prone to biases caused by inherent power structures or interpersonal factors, all of which may create undesirable effects and inequities.
Our work suggests that simply applying computationally fair division algorithms and imposing the results on users has a negative effect on decision making. Rather, it is important to provide a social tool that lets people see how a decision was made and adjust it for their group. Providing such social tools is an important way of supporting discussion and social awareness of equalities or inequalities, and of cultivating fairer outcomes for all.
Moving Forward
Now that we knew these group adjustments improved the group's fairness perceptions, I began exploring whether I could improve the visual design of the dashboard.