I interned at the Machine Learning Department at Carnegie Mellon University on the Spliddit team for twelve weeks during the summer of 2017. Our team created a web-based tool to improve the communication of machine learning results and the decision-making that takes place in a group setting.

Min Kyung Lee (advisor), Shashank Ojha (software engineering), Chris Sheng (software engineering), and Anuraag Jain (software engineering).

My Role:
Worked as the sole designer, contributing to high-level decisions with the rest of the team, defining user journeys and interactions, creating visual designs, leading and conducting user studies, and co-authoring a paper.

The final deliverables were digital wireframes, a working prototype, and a paper in submission to the 2018 ACM CHI Conference on Human Factors in Computing Systems.
The Problem Space
Gaps between Algorithms & Social Contexts 
Algorithms enable efficient, data-driven, scalable social decision-making, such as allocating work on labor platforms, determining the locations of resources in cities, and aggregating citizens' opinions for policy. But they ignore important social values such as fairness, altruism, nuance, and context, all of which are difficult to model computationally but vital to social justice and thriving communities.
We worked to identify this gap in algorithmic mediation on a fair division website, and bridged it by providing a tool that helps people understand algorithmic decisions and adjust them to best fit their context.

Our Concept Goals 
A tool for communication & discussion as a group

We were on the Spliddit team to help "humanize" the experience of using Spliddit's (spliddit.org) fair division application. To help people understand algorithmic decisions and adjust them to best fit their context, we envisioned three main goals for our tool:
Our Process 
 Research + Understanding the Problem 

• Worked with research scientists and engineers to understand Spliddit's algorithms: what they are, how they work, and what the relationships are between their inputs and outputs.
• Investigated previous research using Spliddit: how have people interacted with Spliddit in groups, what are their pain points, and how does Spliddit differ from discussion-based social division? (See "Fairness Perceptions of Algorithmically Mediated vs. Discussion-Based Social Division" by Min Kyung Lee and Su Baykal.)
• Literature reviews: what are fair division algorithms, what is economic fairness, how do people divide things in real life, etc.?
Defining Terms
What does it mean to be fair in a group?

Fairness, or treating everyone equally or equitably, is fundamental to our society. Even primates have been shown to have a sense of fairness, and rebel when it is violated. Two main operationalizations of fairness are equality and equity:
Equality means treating everyone as an equally-deserving individual regardless of their individual characteristics. 

Equity means responding to individuals based on their preferences, contributions, and merits.

It is often argued that equity maximizes the utility of resources, but the criteria that define who deserves more are subjective, varying across cultures as well as individual and organizational values and contexts.
What are Fair division algorithms?

Fair division algorithms are part of a subfield of computer science that combines artificial intelligence and economics and defines fair division computationally and mathematically. Fair division algorithms define their outcome metric (e.g., the utilities that each agent receives from their division outcome), take individual agents’ input on resources (e.g., preferences or valuations), and allocate resources in a way that satisfies certain fairness properties.

The main benefit of fair division algorithms is that they are proven to guarantee these properties with any combination of agents and inputs. For this reason, these algorithms tend to be used in applications that need predefined fairness criteria, such as organ donation, participatory budgeting, or social division.
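As a toy illustration of what "satisfying fairness properties" means (this is not Spliddit's actual algorithm, and the names, valuations, and allocation below are hypothetical), here is a sketch of checking two classic properties, envy-freeness and proportionality, for a goods allocation:

```python
# Toy sketch: checking fairness properties of a goods allocation.
# All data here is made up for illustration; real systems like Spliddit
# compute allocations with far more sophisticated optimization.

def utility(agent, bundle, valuations):
    """Total value an agent assigns to a bundle of items."""
    return sum(valuations[agent][item] for item in bundle)

def is_envy_free(allocation, valuations):
    """True if no agent values another's bundle more than their own."""
    return all(
        utility(a, allocation[a], valuations) >= utility(a, allocation[b], valuations)
        for a in allocation for b in allocation
    )

def is_proportional(allocation, valuations):
    """True if every agent gets at least a 1/n share by their own valuation."""
    n = len(allocation)
    all_items = [item for bundle in allocation.values() for item in bundle]
    return all(
        utility(a, allocation[a], valuations) >= utility(a, all_items, valuations) / n
        for a in allocation
    )

# Example: two agents rate three snacks out of 100 points each.
valuations = {
    "Ana": {"chips": 60, "cookies": 30, "soda": 10},
    "Ben": {"chips": 20, "cookies": 30, "soda": 50},
}
allocation = {"Ana": ["chips", "cookies"], "Ben": ["soda"]}
```

With this allocation, both checks pass: each agent weakly prefers their own bundle and receives at least half of the total value by their own rating, which is the kind of guarantee the algorithms prove to hold for any inputs.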
Identifying Fundamental Problems
Difficulty in quantifying subjective preferences during input phase
Algorithms assume that individual inputs reflect "true" preferences, but the process of quantifying subjective preferences is prone to heuristics and biases. The algorithmic model's outcomes depend heavily on each group member's "true" preferences, yet individuals sometimes strategized their input for a variety of reasons, which did not always return the outcomes they desired.

The input phase of preferences is done individually: the more you want something, the higher you indicate your "preference."

Algorithms need transparency that is actionable for checking & monitoring
Users receive their results via email with a chart of who received what. While general explanations of algorithmic fairness were helpful, people still could not make sense of the reason for their algorithmic decision, which contributed to perceived unfairness.

Accounting for Multiple Concepts of "Fairness" in different groups
In the real world, many people value equality in distribution as well as equity, feeling that each person's division should be similar to the others', not just maximally satisfy individual preferences. For some, altruism is an important aspect of group satisfaction, even at the expense of some of their own gains. And if there is a reason someone got more or less, there should be an explanation.
Adding Value + Addressing Issues 
To add value and address these issues, we emphasized two design aspects: transparency and control.

A transparent algorithm can promote discussion within the group.
Previous research indicated that decisions made through discussion were perceived as fairer than those mediated by algorithms, but if algorithms were made clear enough to understand, perhaps they could promote fruitful discussion too.

Control over algorithm results can clear up misconceptions
This option lets users "probe" the algorithm's results by altering values and observing how each change affects the rest of the group. This simulation lets users become more comfortable with the results by directly observing input/output relationships, and it provides insights for discussion.
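The probing interaction can be thought of as a what-if recomputation: move an item between members and show how everyone's utility would shift. The sketch below is a hypothetical illustration of that idea (the function names and snack data are made up, not our actual implementation):

```python
# Hypothetical sketch of the "probing" interaction: simulate moving one
# item between members and report how everyone's utility would change.

def utilities(allocation, valuations):
    """Each member's valuation of their own current bundle."""
    return {a: sum(valuations[a][i] for i in bundle)
            for a, bundle in allocation.items()}

def what_if_move(allocation, valuations, item, source, target):
    """Return each member's utility change if `item` moved source -> target."""
    before = utilities(allocation, valuations)
    trial = {a: list(b) for a, b in allocation.items()}  # copy; don't mutate
    trial[source].remove(item)
    trial[target].append(item)
    after = utilities(trial, valuations)
    return {a: after[a] - before[a] for a in allocation}

# Example: two members rated three snacks out of 100 points each.
valuations = {
    "Ana": {"chips": 60, "cookies": 30, "soda": 10},
    "Ben": {"chips": 20, "cookies": 30, "soda": 50},
}
allocation = {"Ana": ["chips", "cookies"], "Ben": ["soda"]}
```

Calling `what_if_move(allocation, valuations, "cookies", "Ana", "Ben")` would show Ana losing 30 points and Ben gaining 30, which is exactly the kind of input/output relationship the dashboard surfaces for discussion.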
Challenges & Limitations we faced with our goals
Bridging the gap (in the meantime), not fixing the problem 
Because many of the problems derived from the limitations of the fundamental algorithmic assumptions themselves (such as the economic fairness assumption), our design ultimately aimed to alleviate, not fix, the core problems. However, we saw this as a positive aspect, as our extension encouraged group communication and discussion.


Exploring Different Ways of Visualizing the Algorithm's Workings
The Basic Framework We Started With
In an effort to bridge this gap, we developed a simple list of goals to address the issues we identified.
1) to enable users to understand how and why algorithms make certain decisions
2) to identify where algorithms’ assumptions fit or diverge from their own concepts of fairness
3) to adjust algorithmic allocations if desired as a group

These three ideas were sectioned into:
• Fairness Properties 
• Breakdown of Results
• Suggest Changes to Result
Screen for Pilot Testing with friends
Testing with Friends + Revisiting our Design 
Some points taken from our pilot testing with friends and lab members:

• Fairness properties were still difficult to truly understand and apply to reasoning
• The "Suggest Changes" section takes up too much room for being a small focus
• It was sometimes unclear which elements were clickable (especially in the rent dashboard)
• Interactions for changing results were unclear
• "It's a lot to take in at once"

In response, we decided to prioritize letting the data visualization itself speak to users, as well as making the fairness properties clearer.
Because the Fairness Properties were unclear, we brainstormed different visualization options.
Moreover, we decided to follow Google's Material Design to leverage its existing component libraries, so my ideas were slightly modified for the final interface used in user testing.
Challenges & Limitations we faced with Implementation
We used Spliddit’s original interface and algorithm to collect and process user input. Our interface was connected to the Spliddit website, but visualized the algorithmic division outcomes.

We ideated over 20 UI ideas to support human and algorithmic decision-making for the Rent, Goods, and Tasks algorithms, synthesizing them by impact and feasibility and validating them in pilot studies with groups and friends from the lab to choose our final "dashboard" for the research study. However, due to complexity and time constraints, we decided to implement only the Goods interface.
Concept Walk-Through: 
Interfaces to support human & algorithmic decision making
We direct users to a dashboard after their results are calculated. In this dashboard, users can explore the fairness properties, view other members' results and preferences, and alter results to best fit their context. We hope this tool will support human and algorithmic decision-making.

User Testing 
Evaluating the role of algorithmic transparency and control in perceived fairness of decision-making
To evaluate the role of algorithmic transparency and control in perceived fairness of decision-making, we conducted a within-subjects laboratory experiment in which groups of two to seven people divided snacks using Spliddit and our interface. We ran 71 participants in 23 sessions. 

(For detailed results of our study, please contact me.)
Initial reactions before the interface
Some participants were satisfied with what they received, but unsure how to judge other participants' results without knowing their preferences. Some participants were unsatisfied with their results because they had received fewer items than other participants; this decreased the perceived fairness of the group outcome as well, because they had expected a more even distribution of items. Still others initially thought their outcome was fair because they had received items they rated highly, and assumed that the group outcome was fair based on their own results.

Effects of transparency
Many understood how the algorithm worked based on the step-by-step description, and they later used that knowledge to interpret the input and output visualization. At the same time, the explanation alone did not make people trust the algorithm or see the results as automatically fair. On the other hand, the input and output visualization, which made everyone's preferences and outcomes transparent, had a large impact on how people judged the fairness of their outcomes.

Effects of Control over Outcomes
Control, or having a chance to explore the algorithmic allocation by changing one's own or another group member's results, improved the perceived fairness of both individual and group outcomes among those who initially thought them less fair, regardless of the group's final decision to change or not change the algorithm's original allocation.
Applying to the wider context 
AI + Human Decision Making 

Our work suggests that instead of simply applying computationally fair division algorithms and "convincing" people to accept the outcomes, it is important to provide a tool that lets people see how the decision was made and how they can adjust it for their group. Providing such social tools is an important way of supporting discussion and social awareness of equality or inequality, and of cultivating fairer outcomes for all.
My Reflections
1) Is my work helpful or harmful? 
Spliddit's two primary goals are providing access to fair division methods, and outreach. Some comments came up during our interviews, such as "Why would you want an algorithm to make group decisions about things like snacks?" This was a simple but pretty good question. The limitations of our user studies stemmed precisely from our use of snacks, but it's good to think beyond that: how can this algorithm scale to large communities?

We started this study with the bigger picture of perhaps applying fair division methods to emerging algorithmically-mediated groups, such as virtual teams, crowd work teams, students taking online courses, and other situations where work, rewards or resources are algorithmically distributed for common goals.

2) Always be a learner 😊 
I was really humbled by the talented team of people I was able to work with. I came in with very little knowledge about the process of machine learning or about the mathematical formulas, but I had the help of multiple peers, researchers, and professors guiding and mentoring me.

3) How does my work affect actual people? 
I found it incredibly exciting to talk with real users (over 70 participants) from a wide range of backgrounds and ages to get their feedback on the topic. Their feedback was essential for pinpointing the gaps between what we thought was right and what people really thought about the interactions.