How to Build Technology You Won't Regret

If technology is building the future, have we ever stopped to think about what kind of future we are building? We're focused on moving fast, breaking things, and delivering continuously, but do we take the time to think about who gets left out of what we do?

Maybe the tools we're making won't be used in the way we expect. Or maybe they'll be used by far more people than we can imagine -- for example, what would happen if they scaled to Facebook's 2.8 billion monthly active users?

Building technology responsibly is not just about security, resiliency, and uptime. It also means thinking about environmental impact. And privacy. Most importantly, we have to think about our social impact and what we are asking our users to agree to, knowingly or not.

As technology becomes more personal, in our homes, at our jobs, in our cars, even in our bodies, our responsibility as its creators must also increase. And as the tech industry continues to face a massive talent gap, we have more job security than most and the ability to speak up and ask questions. We should take advantage of this privilege. Everyone in an organization owns the consequences of what we are building, even what we choose to connect it to.

In this post, we share some simple reflections, mechanics, and design thinking to help incorporate ethical considerations into your sprint cycle.

Responsible technology starts with commitment.

When antiracist economist Kim Crayton was looking for an organizational strategy to expand belonging and psychological safety in the knowledge economy, she developed four guiding principles.

  • Technology is not neutral. Nor is it apolitical.
  • Intention without strategy is chaos.
  • Lack of inclusion is a risk and crisis management issue.
  • Prioritize the most vulnerable groups.

If we as teams and tech organizations take these axioms as our foundation, does that change the way we look at what we build on top of them? How often do we prioritize those who experience the most risk? In order to truly create technology that not only can, but should, scale, we have to learn to adapt to uncomfortable situations, Crayton said.

Of course, we have to work with users, or at least with the teammates closest to users -- our colleagues in sales, customer success, and developer experience -- and bring them to the strategy table. Don't rely only on your existing users; test and build alongside a larger test bed, intentionally designing the product for the wider group of people who could end up being your users.

Naturally, the easiest way to follow these guidelines is to have a diverse, equitable, and inclusive team, which reduces the risk of building something no one will use -- or that people will use in ways you haven't even considered. Starting with Crayton's principles is a good way to remember the power and risks that come with building technology.

Responsible technology considers its consequences.

Doteveryone, a former UK-based responsible technology think tank, created a useful technique called Consequence Scanning. It's an agile practice that helps product teams continually ensure that what an organization is building is aligned with its culture and values. It's meant to take place during the initial ideation, roadmap planning, and feature creation of a product.

You start by answering the following questions about your product.

  • What are the intended and unintended consequences of this product or feature?
  • What are the positive consequences we want to focus on?
  • What are the consequences we want to mitigate?
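
To make the practice concrete, here is a minimal sketch, in Python and with entirely hypothetical names, of how a team might record the answers to these questions next to a feature in its backlog, so the mitigation plan travels with the work:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Consequence:
        """One consequence surfaced during a consequence-scanning session."""
        description: str
        intended: bool        # Was this outcome planned?
        positive: bool        # Is it a consequence we want to focus on?
        mitigation: str = ""  # How we plan to monitor, measure, or remediate it.

    @dataclass
    class ConsequenceScan:
        """Answers to the three consequence-scanning questions for one feature."""
        feature: str
        consequences: List[Consequence] = field(default_factory=list)

        def to_mitigate(self) -> List[Consequence]:
            # The consequences we want to mitigate: anything we don't want to amplify.
            return [c for c in self.consequences if not c.positive]

    # Hypothetical example: scanning a private direct-messaging feature.
    scan = ConsequenceScan(
        feature="private direct messages",
        consequences=[
            Consequence("easier collaboration on code", intended=True, positive=True),
            Consequence("a new channel for targeted harassment", intended=False, positive=False,
                        mitigation="ship block/mute controls and abuse reporting before launch"),
        ],
    )
    for c in scan.to_mitigate():
        print(f"Mitigate: {c.description} -> {c.mitigation}")

The structure matters far less than the habit: the point is to revisit the answers as the product evolves rather than write them down once and forget them.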

We often spend a lot of time setting goals and maintaining a backlog of expected features. If we practice behavior-driven development, we also spend time on these intended consequences and the user experience, but it's the unintended consequences that catch us off guard. Technology often has one or more of these six kinds of unintended consequences, Doteveryone found.

  • Lack of digital understanding. Unclear business models and policies leave people not understanding what they are "agreeing to". People willingly hand over their DNA to free ancestry tools to find long-lost relatives, only for it to be revealed that police departments are using the data to investigate crimes. General Data Protection Regulation (GDPR) prompts now pop up on the screens of all European users, but most of us quickly clear the distraction away without understanding any better how those cookies will be used. And because women and people of color are underrepresented on IT teams, this lack of understanding results in men building health apps that don't include period trackers, or virtual backgrounds decapitating images of darker-skinned users.
  • Unexpected users and use cases. People will always find new and unexpected ways to use your app. What3words was built as a simpler alternative to longitude and latitude, but was used to organize social distancing during Black Lives Matter protests. And some will always find new ways to hurt others, from harassment all the way to state-sponsored election meddling.
  • Weak reliability, security, monitoring, and support. How would you monitor the growth of a website or service to prevent accidental or unplanned outcomes, such as YouTube becoming an uncontrolled conduit for extremism? Who is verifying the security and stability of the contact-tracing apps governments created for the COVID-19 pandemic? Why do they crash? Will governments remind us to delete them later?
  • Changes in behavior and norms. These range from emoji becoming their own language, to screen addiction, to my four-year-old trying to swipe the TV screen to change the channel. Contactless mobile payments may keep germ-ridden money out of our hands, but how might they change our spending habits?
  • Displacement. This ranges from technological unemployment -- self-checkouts and ATMs replacing cashiers, chatbots replacing customer service representatives -- to enabling people to meet, study, and work from home.
  • Adverse effects on the planet. The IT industry contributes at least 10-12% of carbon emissions. Are we considering turning off video autoplay to reduce our own product's contribution? Are we even measuring our own footprint? Do environmental factors affect where our websites are hosted?

We tend to assume consequences are negative but, as you can see above, not all of them are. They're just often not the results we expected. It is important to brainstorm and analyze possible consequences, and to develop a plan to monitor, measure, and possibly remediate them.

Aoife Spengeman of the Wellcome Data Lab has developed a free workshop on pursuing ethical product development, and applied it to evaluating the lab's own algorithms. This process classifies unintended consequences into four types:

  • Use case. The product is used in the way both its creator and the user intend.
  • Stress case. The product is used as intended, but has unintended consequences for users. This is what is commonly referred to as an "edge case".
  • Abuse case. The product is intentionally used in a way other than how it was designed to be used.
  • Misuse case. The product is inadvertently used for a purpose other than what it was designed for.
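
If it helps to keep these categories in view during reviews, here is a tiny sketch, again in Python and purely hypothetical, of the classification as an enum a team could use to tag the risks it surfaces:

    from enum import Enum, auto

    class ConsequenceCase(Enum):
        """The four case types from the workshop described above."""
        USE = auto()     # Used the way both creator and user intend.
        STRESS = auto()  # Used as intended, but with unintended consequences (an "edge case").
        ABUSE = auto()   # Intentionally used in a way it was not designed for.
        MISUSE = auto()  # Inadvertently used for a purpose it was not designed for.

    # Example: tag a risk so it can be filtered and tracked alongside the backlog.
    risk = ("users assume the tool's output is comprehensive", ConsequenceCase.MISUSE)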

The Wellcome Data Lab used this exercise to evaluate a machine learning tool designed to help internal teams gain insight into how Wellcome-funded research is being cited, with the aim of measuring the impact of research on the public.

The team identified a major risk: users could misunderstand how the tool works and make assumptions about the accuracy, comprehensiveness, and meaning of its outputs, which could reduce diversity, inclusion, and funding for some projects.

For example, papers written in the global north are disproportionately cited. Without the context provided by Wellcome, this stress case could lead funders using the tool to perpetuate these systemic inequalities by funding only the most-cited projects. Additionally, the tool is more accurate for English-language and science-focused journals. Users may misuse the tool by assuming that research in other languages or disciplines has less impact, when in fact it is just less likely to be picked up by the algorithm.

The result of this work is an effort to ensure users have a better understanding of how the algorithm works and of its limitations.

As Spengeman writes: "No one can prevent the worst from happening, but we can do our best to imagine where things could go wrong and do what we can to mitigate the risk."

Responsible technology seeks to minimize harm.

Open source has inherent diversity, equity, and inclusion issues. A 2020 survey by Stack Overflow found that "developers who are male are more likely to want specific new features, while developers who are female are more likely to want changes to the norms of communication on our site." That request is often paired with words like "toxic" and "rude". A 2017 GitHub survey, in which only 3% of respondents were female and 1% were non-binary, found that half of respondents had "witnessed bad behavior", ranging from rudeness, name-calling, and stereotyping all the way to stalking and outright harassment.

The open source community is also distributed and made up mostly of volunteers, or of people external to the maintaining organization, which makes it inherently harder to control. And successful open source projects can grow to the point where unintended consequences and use cases become the norm -- because when you get to Facebook's user scale, there are no edge cases.

At QCon, product designer Kat Fukui described her then-employer GitHub as "the largest platform for developers to connect and collaborate".

But that's not what it was built for. It started out as a technical need, not a human need.

"GitHub wasn't originally intended to be a social network, but here we are, and it is, because a lot of conversation and human interaction revolves around building code," Fukai said.

However, as GitHub scaled to become the largest source code host in the world, the focus shifted and collaboration features were requested.

Over the years, GitHub's community and safety team has grown around these responsibilities:

  • Make sure the community is healthy.
  • Conduct feature reviews to ensure they do not introduce new vectors of abuse.
  • Fix technical debt.
  • Document and amplify the team's own work to avoid repetition of unintended consequences.

The team was created because, as Fukui says, "When GitHub was founded 10 years ago, we didn't necessarily think about the ways that code collaboration software could be used to harm other people."

So now the job of this cross-functional team is to get creative and figure out how any feature could be used to hurt someone.

According to Fukui, "Building user safety into the foundation of technology is everyone's responsibility. Whether you're a manager, individual contributor, designer, researcher, [or] engineer, it's everyone's responsibility ... because every platform or feature can and will be abused."

For every feature review, we ask: How could this feature be used to harm someone?

Of course, it's not just about the human drive to create a safe space, either. If you don't have the resources in place to act quickly when someone reports abuse, you will lose users.

Fukui's team applied the agile practice of user stories. A user story is a very brief description of a feature from the user's point of view: as a user of type X, I want Y, so that Z.

On GitHub's community team, user stories are used to identify stress cases. Sara Wachter-Boettcher expands on creating stress-case user stories in her book Technically Wrong, especially to help build empathy for how users feel in dire situations, such as fleeing abuse.

The use of the term "stress case" is intentional, Fukui said, to humanize edge cases.

"Even if it happens rarely, a stressful edge case can have a big negative impact and you can lose trust very quickly, especially when it happens openly," she said.

She gave an example: a stress-case user story from when users requested a private chat space within GitHub. As we all know, direct messages can amplify harassment.

In this user story, Fukui imagines a user trying to escape harassing DMs from an abusive relationship. The team then asks the following questions and comes up with answers specific to this use case.

  • What problem do they have? It's really easy to create "sock puppet accounts" whose sole purpose is to spam or abuse.
  • How are they feeling? Powerless, and fearful for their personal safety and even that of those around them.
  • What does success look like? Support must have tools to minimize the impact of mass abuse, and users must have the power to block or shut off DMs.
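
As a rough illustration, and not GitHub's actual tooling, the same stress-case story and its three answers could be captured as a structured record that future feature reviews can check themselves against:

    from dataclasses import dataclass

    @dataclass
    class StressCaseStory:
        """A user story plus the stress-case questions the team answers for it."""
        user: str     # As a <type of user>...
        want: str     # ...I want <goal>...
        so_that: str  # ...so that <reason>.
        problem: str  # What problem do they have?
        feeling: str  # How are they feeling?
        success: str  # What does success look like?

    dm_story = StressCaseStory(
        user="someone fleeing an abusive relationship",
        want="to block or shut off direct messages entirely",
        so_that="my harasser cannot reach me through the platform",
        problem="sock puppet accounts are cheap to create and exist only to spam or abuse",
        feeling="powerless, and fearful for their personal safety and that of those around them",
        success="support has tools to minimize mass abuse, and users can block or disable DMs",
    )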

User stories don't just create user empathy, surface feature gaps, and align the organization around feature releases; they also bring in different perspectives and specialized knowledge.

User stories can also serve as a verification point. For example, if you are developing other features that could be abused, then you can refer to this user story in future decisions.

You can then weave your user stories together to build a solid technical foundation in the form of safety principles or guidelines. Or you can start with those principles, as this blog post does, and use them as criteria to tie all your user stories together.

As with everything in the agile lifecycle, it's important to constantly reflect, retrospect, and iterate.

