ADUniverse: Evaluating the Feasibility of (Affordable) Accessory Dwelling Units in Seattle

Types of accessory dwelling units

Project leads: Rick Mohler, Associate Professor, Department of Architecture, University of Washington; and Nick Welch, Senior Planner, City of Seattle Office of Planning and Community Development

Data science lead: Joseph Hellerstein

DSSG fellows: Emily A. Finchum-Mason, Yuanhao Niu, Adrian Mikelangelo Tullock, Anagha Uppal

Project Summary: Seattle has the nation’s seventh most expensive housing market and third-largest homeless population despite being only its 18th-largest city. Exacerbating this challenge is the fact that three-quarters of the city’s land zoned for housing is reserved for low-density, detached single-family dwellings that few households can afford.

However, this offers an opportunity to increase Seattle’s affordable housing stock through homeowner-developed accessory dwelling units (ADUs)—small, separate residences within or behind single-family homes. ADUs provide access to high-opportunity neighborhoods, have relatively lower rents, allow older adults to age in place, and provide supplemental income for homeowners. Despite these benefits, fewer than two percent of Seattle’s 135,000 single-family lots currently have an ADU. Regulatory, financial, design, and permitting challenges stymie ADU production. Reducing these barriers can increase production and improve our housing landscape.

This project will advance that goal by providing a citywide feasibility analysis and a prototype web-based interactive tool that estimates parcel-level suitability for a detached ADU given property characteristics, housing submarket, and neighborhood-level socioeconomic conditions. The tool would enable individual homeowners to assess the feasibility of building an ADU, and aid nonprofits and policymakers in exploring ADUs as part of an anti-displacement strategy. In particular, the DSSG team will conduct an analysis for policymakers currently considering the development of a program to provide ADU financing for lower-income homeowners struggling with rising costs in exchange for renting the ADU to a lower-income tenant.

This project will integrate data on factors that influence the physical and financial feasibility of constructing a detached ADU, including property data from the King County Assessor, environmental data from the City of Seattle, real estate market data, and spatial analysis of site conditions.  It will involve geospatial analysis, econometric analysis, model building and interface design using a range of public data sources that must be joined and rendered interoperable.

Project Outcomes: The team successfully developed the prototype application to enable homeowners and other users to determine the parcel-level physical and financial feasibility of developing an ADU. In addition, the team created a dashboard comparing demographic data with ADU production on a neighborhood-by-neighborhood basis that can help policymakers and housing advocates more effectively evaluate and shape City-sponsored or privately funded programs related to using ADUs as an affordable housing strategy.

The property research tool takes homeowners through a step-by-step feasibility analysis, starting with binary prerequisite factors for constructing a DADU, such as zoning, lot size, lot dimensions, and lot coverage. The tool then evaluates more nuanced secondary factors, such as the amount of tree canopy, steep slope conditions, and potential side sewer conflicts. While it cannot be definitive about all these secondary factors, the tool presents information about key property characteristics, bringing these issues to the homeowner’s attention.
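The binary screen described above can be sketched as a simple pass/fail check over a parcel record. The zoning codes and numeric thresholds below are illustrative placeholders, not the values actually used by the tool or by Seattle’s land-use code:

```python
def passes_prerequisites(parcel: dict) -> bool:
    """Return True only if a parcel clears every binary DADU prerequisite."""
    checks = [
        # Single-family zones eligible for a DADU (illustrative codes)
        parcel["zone"] in {"SF 5000", "SF 7200", "SF 9600"},
        parcel["lot_sqft"] >= 3200,                         # hypothetical minimum lot size
        min(parcel["width_ft"], parcel["depth_ft"]) >= 25,  # hypothetical minimum dimension
        parcel["coverage_pct"] < 35,                        # hypothetical lot-coverage cap
    ]
    return all(checks)

example = {"zone": "SF 5000", "lot_sqft": 4800,
           "width_ft": 40, "depth_ft": 120, "coverage_pct": 28}
print(passes_prerequisites(example))  # True for this illustrative parcel
```

Because these factors are binary, a single failing check rules out a DADU, which is why the tool evaluates them before any of the more nuanced secondary factors.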

The challenges associated with financing an ADU are another barrier this tool seeks to address. After evaluating physical viability, the user proceeds to an evaluation of financial feasibility that considers the type and size of ADU and provides a preliminary assessment of project costs, with individual costs broken out. The tool then prompts the homeowner to consider how much they might borrow based on the estimated project cost and, assuming an interest rate for a home equity line of credit, determines a monthly debt payment and estimates additional property taxes. Using data from Zillow, the tool estimates the anticipated monthly rent, allowing the homeowner to complete a preliminary cost/benefit analysis.
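The borrowing arithmetic above can be sketched with the standard fixed-rate amortization formula. All inputs below (project cost, rate, term, tax rate, rent) are hypothetical round numbers; the actual tool draws per-parcel cost estimates and Zillow rent data, and a real HELOC may carry a variable rate:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Monthly payment on a fixed-rate, fully amortizing loan."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

project_cost = 250_000                        # hypothetical DADU construction cost
debt_service = monthly_payment(project_cost, annual_rate=0.055, years=20)
added_tax = project_cost * 0.01 / 12          # hypothetical 1%/year added property tax
est_rent = 1_800                              # hypothetical Zillow-style rent estimate
net_monthly = est_rent - debt_service - added_tax
print(f"debt ${debt_service:,.0f}/mo + tax ${added_tax:,.0f}/mo "
      f"vs. rent ${est_rent:,.0f}/mo => net ${net_monthly:,.0f}/mo")
```

Comparing estimated rent against debt service plus added taxes is exactly the preliminary cost/benefit view the tool presents to the homeowner.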

In addition, the application is intended to promote the development of ADUs by connecting existing ADU owners with those considering their development. To this end, the tool allows homeowners to see the number and location of existing nearby ADUs. The intention is to allow ADU owners to share their experiences and offer advice to those in the planning stages, and to build a city-wide community around ADUs while remaining sensitive to the desire of many homeowners to maintain privacy.

Multiple opportunities for stakeholder engagement punctuated the process. The team toured an existing ADU and DADU and interviewed a homeowner who had recently completed an ADU. The team presented a preliminary version of the application to the Seattle Planning Commission, which is a partner in the project. Over the course of the project, the Seattle City Council discussed and ultimately passed the country’s most progressive ADU legislation; the team attended the Council hearing in which the legislation was unanimously passed, allowing them to hear public testimony both pro and con and individual Councilmembers’ arguments in favor of the legislation.

Subsequently, Mayor Jenny Durkan signed the legislation into law and issued an Executive Order specifically citing the DSSG project and directing Seattle IT to explore further developing and officially launching the application. At the conclusion of the program, the team met with key staff of Seattle IT to present the application and provide them with code for further development. Seattle IT is currently evaluating the cost of building on the team’s work and launching a public-facing tool and will provide this information to the Mayor for further consideration.

The project received honorable mention in the Research & Innovation category at the American Institute of Architects (AIA) Seattle chapter’s Honor Awards for Washington Architecture in November 2019.

Learn more by viewing the final presentation slides and project website. View a video of all four final presentations here.

Developing an Algorithmic Equity Toolkit with Government, Advocates, and Community Partners

Project lead: Mike Katell, PhD Candidate, UW Information School

Data science lead: Bernease Herman

Faculty Advisor: Peaks Krafft, Senior Research Fellow, Oxford Internet Institute

Community Engagement Lead: Meg Young, PhD Candidate, University of Washington Information School

DSSG fellows: Corinne Bintz, Vivian Guetler, Daniella Raz, Aaron Tam

Project Summary: In partnership with ACLU of Washington, fellows will create an Algorithmic Equity Toolkit—a set of tools for identifying and auditing public sector algorithmic systems, especially automated decision-making and predictive technologies. Extensive evidence demonstrates harms across highly varied applications of machine learning; for instance, automated bail decisions reflect systemic racial bias, facial recognition accuracy rates are lowest for women and people of color, and algorithmically supported hiring decisions show evidence of gender bias. Federal, state, and local policy does not adequately address these risks, even as governments adopt new technologies. The toolkit will include both technical and policy components: (1) an interactive tool that illustrates the relationship between how machine learning models are trained and adverse social impacts; (2) a technical questionnaire for policymakers and non-experts to identify algorithmic systems and their attributes; and (3) a stepwise evaluation procedure for surfacing the social context of a given system, its technical failure modes (i.e., potential for not working correctly, such as false positives), and its social failure modes (i.e., its potential for discrimination when working correctly). The toolkit will be designed to serve (i) employees in state and local government seeking to surface the potential for algorithmic bias in publicly operated systems and (ii) members of advocacy and grassroots organizations concerned with the social justice implications of public sector technology.

In this project, fellows will gain hands-on experience using and auditing machine learning algorithms in surveillance contexts, with a focus on understanding the inner workings of these algorithms. To center the perspectives of those directly affected by such systems, the team will also engage community partners’ input on the Algorithmic Equity Toolkit throughout its development.

This project touches on issues and methodologies relevant to a wide range of fields, such as law, public policy, sociology, anthropology, design, social justice, data science, programming, artificial intelligence, geography, mathematics, statistics, critical gender and race studies, history, urban studies, and philosophy.

Project Outcomes: We iteratively developed our toolkit through a participatory design process that engaged data science experts, community partners, and policy advocates, while also drawing upon an array of prior literature and similar toolkit efforts. Initially, based on the regulatory focus of prior academic research, we envisioned that the primary users of the Algorithmic Equity Toolkit would be employees in state and local government seeking to surface the potential for algorithmic bias in existing systems. We thought advocacy and grassroots organizations could also find the toolkit useful for understanding the social justice implications of public sector systems.

Through our participatory design process, we refined our audience and design goals to focus on helping civil rights advocates and community activists—rather than state employees—identify and audit algorithmic systems embedded in public-sector technology, including surveillance technology. We achieve this goal through three toolkit components: (1) a flowchart designed to help lay users identify algorithmic systems and their functions; (2) a Question Asking Tool (QAT) for surfacing the key issues of social and political concern for a given system—together, these tools reveal a system’s technical failure modes (i.e., potential for not working correctly, such as false positives) and its social failure modes (i.e., its potential for discrimination when working correctly); and (3) an interactive web tool that illustrates the underlying mechanics of facial recognition systems, such as the relationship between how models are trained and adverse social impacts. In creating our own toolkit, we followed a weekly prototyping schedule interspersed with stakeholder feedback and co-design sessions.

The underlying questions that drove our design of these components were: What ethical issues should civil rights advocates be concerned with in regards to surveillance and automated decision systems? How are algorithmic systems reinforcing bias and discrimination? What do community organizers and non-tech experts understand about algorithmic tools and their impacts? What should they know about surveillance and automated decision systems to identify them and know how they work? In comparison to existing resources, which tend to target software engineers and in some cases policymakers as an audience, we focused on policy advocates and community activists as users.

Learn more by viewing the final presentation slides and the project website. View a video of all four final presentations here. Read the paper, Toward Situated Interventions for Algorithmic Equity: Lessons from the Field, presented at the January 2020 Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency.

Understanding Congestion Pricing, Travel Behavior, and Price Sensitivity


Project lead: Mark Hallenbeck, Director, Washington State Transportation Center, University of Washington

Data science lead: Vaughn Iverson

AP Photo/Elaine Thompson

DSSG fellows: Shirley Leung, Cory McCartan, Christopher (CJ) Robinson, Kiana Roshan Zamir

Project Summary: Traffic congestion is a worldwide issue with environmental, health, and economic impacts. We cannot build our way out of it. Funding is limited, and land for building new roads is finite. Building bigger roads in urban areas simply leads to more traffic and more congestion. Therefore, transportation authorities need new ways to optimize the performance of existing and future roadways. Many regions, like the Puget Sound, are exploring a variety of congestion pricing schemes both to generate revenue for roadway system improvements and to manage traffic flow more sensibly.

The Washington State Department of Transportation has partnered with the DSSG program to better understand travel behavior on I-405’s congestion priced Express Lanes. The DSSG team is examining: the time savings achieved by I-405 Express Lanes users; how those time savings vary given facility price; and how socio-demographic, geographic, and mode choice factors affect how the benefits and costs of the Express Lanes are distributed. Finally, DSSG will also examine how user behavior affects roadway performance. The intent is to provide WSDOT with important information needed to understand the effectiveness of current congestion pricing policies and guide future pricing policy decisions.

Project Outcomes:

Who uses the HOT lanes? The team estimates that the median household income of noncommercial HOT lane users is approximately $101,000 per year, around 20% higher than the median household income of King and Snohomish counties. Fifteen trips are made for every 1,000 households in the region making $50,000 per year, while 35 trips are made for every 1,000 households making $200,000 per year, a 133% increase. Yet most facility trips are not made by high-income households. The team estimates that over 80% of facility users have an income below $200,000 per year.

Who benefits from the HOT lanes? Higher-income households accrue far more net benefit than lower-income households do. The average $200,000-per-year household in the region gains around 6.5 cents in net benefit from the facility, 86% higher than the 3.5 cents gained by the average household making $50,000 per year. This pattern is largely to be expected, however, given the income differential in usage: households that use the facility more often naturally accrue more benefits. In fact, the same $200,000-per-year household that takes in 86% more benefits than a $50,000-per-year household uses the facility 133% more. This discrepancy becomes much more visible when net benefits are considered on a per-trip basis.

Who benefits per trip? The equity picture looks much different per trip. Lower-income households actually gain more net benefit per trip than do higher-income households. Returning to the same hypothetical households above, at $200,000 and $50,000, the former gains 21% less in net benefit per trip than the latter does. Overall, however, the distribution is quite even: nearly all drivers can expect a per-trip net benefit between $1.50 and $2.50. Of course, there is substantial variation in net benefit across specific trips; the findings presented here are merely averages across a large number of users and trips.
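The per-trip reversal follows directly from the two ratios reported above: the higher-income household accrues 86% more total benefit but makes 133% more trips. A two-line check using the rounded ratios from this writeup reproduces the gap:

```python
benefit_ratio = 1.86  # total net benefit, $200k vs. $50k household (86% more)
trip_ratio = 2.33     # trips made, $200k vs. $50k household (35 vs. 15 per 1,000)

per_trip_ratio = benefit_ratio / trip_ratio
print(f"per-trip benefit ratio: {per_trip_ratio:.2f}")  # 0.80, i.e. about 20% less
```

With these rounded inputs the gap comes out near 20%; the 21% figure in the text presumably reflects the team’s unrounded estimates.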

Learn more and view supporting graphs and final presentation slides on the project website. View a video of all four final presentations here. The project was subsequently mentioned in the 10/16/19 KOMO News article, Washington to study tolling for low-income drivers, considers subsidizing rides.

Natural Language Processing for Peer Support in Online Mental Health Communities

Alex Ware on Unsplash

Project leads: Tim Althoff, Assistant Professor, Computer Science & Engineering, University of Washington; and Dave Atkins, Research Professor, Psychiatry and Behavioral Sciences, University of Washington

Data science lead: Valentina Staneva

DSSG fellows: Shweta Chopra, David Nathan Lang, Kelly McMeekin

Project Summary: Everyone encounters challenges in life – whether that includes stress from school or work or more significant problems like depression and addiction.  When we hit these challenges, we often reach out to friends and family for emotional support and problem-solving help. Peer support is an extension of this and has a long and well-researched history as a first line of intervention for mental health and addiction problems. Traditionally, peer support was in-person, but technology advances mean that support can be ‘crowdsourced’ and scaffolded and thus taken to scale.

However, peers are by their nature not licensed counselors, and thus it is critical to provide feedback on what really works and is helpful versus what might be very well-intentioned but not so helpful. Our Data Science for the Social Good project will focus on using data from an online peer support platform to better understand what types of responses are the most helpful to young adults sharing their struggles online. We will pursue this objective by analyzing a large-scale dataset of around 100 million posts and interactions. We then want to use these insights to develop tools and trainings for peers to help them be as helpful as they can when supporting others in need.

Project Outcomes: The Peer Support team met a number of key objectives during our 2019 DSSG project. First, our team used data from a real-world, online peer support platform, including over 20 million posts and responses. A key initial task was accessing, comprehending, and filtering this large amount of data. Specifically, we developed a series of selection rules to narrow down the data to focus on posts and responses dealing with mental health and related concerns.

A second task relied on natural language processing (NLP) tools for summarizing large text corpora to answer the foundational questions: What are the posts and responses focused on? What are people on the platform requesting help for? Using latent Dirichlet allocation (a type of topic model), our team summarized major content themes, revealing that expressions of hopelessness and despair were very prevalent, as were encouragements in response.

A third important task was using social media-style indicators within the platform to identify and evaluate markers and proxies of helpful behavior. There were no ground-truth assessments of helpfulness, and thus we used indicators from the platform such as self-reported changes in mood, “likes”, and expressions of gratitude derived from the text of posts and responses.

Finally, our team used machine learning and NLP models to predict helpfulness indicators using text-based and derived features. The text-based features in particular begin to describe what helpful responses look like in an online peer support environment. A summary of our work and outcomes was provided to our key stakeholders, and future work will continue to explore how NLP-based methods can make online peer support platforms as helpful as possible.
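A minimal stand-in for this modeling step: predicting an invented helpfulness proxy (say, whether the poster reported improved mood) from response text, using TF-IDF features and logistic regression. The responses, labels, and model choice are all assumptions for illustration, not the team’s actual features or data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

responses = [
    "you are not alone, i am here for you",
    "that sounds really hard, thank you for sharing",
    "just get over it already",
    "stop complaining so much",
]
helpful = [1, 1, 0, 0]  # invented proxy labels (e.g., poster reported improved mood)

# TF-IDF text features feeding a logistic-regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(responses, helpful)

print(model.predict(["thank you so much for sharing this"]))
```

Inspecting the learned coefficients of a model like this is one way text-based features "begin to describe" what helpful responses look like.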

Learn more by viewing the final presentation slides and project website. View a video of all four final presentations here.