Data By and For the People

I’m the rare human who loves public speaking. Yes, I get nervous, of course, but I also get a huge charge out of it. So this Slack from my coworker Susy had a special amount of serotonin accompanying it:

I soon huddled with another colleague, Rosana (like Susy, Rosana is far more familiar with crowdsourcing than I am). I got into my best Michael Krasny consultant-curiosity groove to beat back the imposter syndrome, and hopefully helped in crafting the panel topic: Crowdsourcing: Data By and For the People, to be hosted fittingly at Mozilla’s Community Space in SF.

Fortunately the conference organizer Epi loved our topic, and we enlisted Megan to be on the panel, along with Christian Cotichini of HeroX, Nathaniel Gates of Alegion, and Anurag Batra of Google.

Since the link to our panel doesn’t include what we wrote up to describe it, I’m pasting it here so you can get a sense of what it was really about:

Per CSW’s website, by “engaging a ‘crowd’ or group for a common goal — often innovation, problem solving, or efficiency,” crowdsourcing can “provide organizations with access to new ideas and solutions, deeper consumer engagement, opportunities for co-creation, optimization of tasks, and reduced costs.” 

But is this a fair value exchange for everyone involved? The above solves a number of problems for companies, but does it help contributors? And what role does crowdsourcing play in social equity?

As products and services increasingly incorporate Artificial Intelligence (AI), crowdsourcing has a critical role to play in ensuring new technologies and algorithms serve society equally. To quote The Verge: “Data is critical to building great AI — so much so, that researchers in the field compare it to coal during the Industrial Revolution. Those that have it will steam ahead. Those that don’t will be left in the dust. In the current AI boom, it’s obvious who has it: tech giants like Google, Facebook, and Baidu.” If we build the next generation of AI apps using data from a few select players, we risk creating a world that serves the needs of a few corporate entities vs. the needs of all.

If we crowdsource data to train the next generation of AIs, we stand in a much better position to deliver products and services that incorporate the needs of many vs. a few.

This panel will explore how different organizations are approaching crowdsourcing, and dive into the specific implications around rewarding contributors, and the social responsibility of organizations who use crowdsourcing. 

We organized a prep call which went great – we got into some of the thorny topics and surfaced some healthy panel-bait discomfort. But by far the most memorable part came at the end, when one of the panelists (we’ll let the reader guess) announced s/he had to “go to another part of campus” and “just wanted” to say that the published topic – the one we had just prepped for, Crowdsourcing: Data By and For the People – really shouldn’t be about ethics at all, because nothing really “goes anywhere” from ethics discussions. Instead, we should delve into the “intricacies of crowdsourcing itself.”

Just before s/he dashed off to grab a campus bicycle, I reminded the call that the organizer had loved the topic, and I was super grateful that another panelist chimed in to say the topic was precisely why s/he agreed to be on the panel.

I quickly developed a strong energy for day-of-show.

And it went fine. Granted, we were one of just a few panels that weren’t in the main building – so away from all the foot traffic – and we were slotted at the tail end of the conference, 3:00pm on a Friday. So we were heartened by the ten or so folks who did show up and listened attentively.

We tackled the time this way:

  • How do you tie into crowdsourcing? 
  • How do you see contributors benefiting?
  • How about the economics?
  • How about ownership and meaningful influence?

And the takeaway? Our closing point was: if you get others’ data, use it only for the intended use case. And as Megan reminds us, “be sure the intended use case is clear; ‘consent’ doesn’t mean anything if people don’t understand what they’re opting into. And if it changes, that’s okay! Just let people know and require them to consent again.”
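Megan’s rule of thumb – consent attaches to one clearly stated use case, and a changed use case means asking again – can be sketched in code. This is a hypothetical illustration, not anything presented on the panel: the names `ConsentRecord` and `hasValidConsent` and the versioning scheme are mine, assuming each published statement of purpose gets its own version number.

```typescript
// Hypothetical sketch: a consent record stores *which version* of the
// stated purpose the user agreed to. If the organization changes the
// use case, it bumps the purpose version, and all older consents stop
// counting until the user opts in again.

interface ConsentRecord {
  userId: string;
  purposeVersion: number; // the statement of purpose the user actually saw
}

function hasValidConsent(
  record: ConsentRecord | undefined,
  currentPurposeVersion: number
): boolean {
  // No record at all, or consent given against an older statement of
  // purpose, both mean the same thing: we must ask again.
  return record !== undefined && record.purposeVersion === currentPurposeVersion;
}
```

The design choice worth noticing: “consent” is never a boolean on its own; it is always paired with the specific terms it was given under.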

Personally I’m quite gratified we didn’t decide to unilaterally change the terms of service on our panel topic, either.

UX vs. DX

I was introduced to Estelle Weyl through my colleague Ali, who suggested Estelle as a speaker for Mozilla’s speaker series. I was intrigued by Estelle’s teaching on the differences between how we as humans perceive the speed and performance of our web browsers (vs. the precise, technical “reality”).

She was of course great, and her final slide also called out another important distinction:

So how fun was it when, a bit over a year later, she invited me to moderate a panel on, yep:

https://forwardjs.com/schedule

We had Tomomi Imura of Slack on board, and were super fortunate to recruit Sarah Federman (newly) of Atlassian and Jina Anne.

This group was so amazing, they agreed to meet on a holiday before the event to huddle. It was there that the subtitle emerged:

We realized we hadn’t intended it, and while we didn’t want to make the Lakoff mistake, we did think it was cool.

So, whiskey it was.

Oh also, the conversation was as great as these women. Estelle and Tomomi had previously posted different ways to tackle this. Estelle defines DX as “the methodologies, processes, and tools (frameworks, libraries, node modules, pre- and post-processors, build tools, other third-party scripts, and APIs) by which developers develop web applications for their user base.”

And, because developers are often users too (think developer tools and, of course, frameworks), Tomomi approaches their DX in a way that exhorts developer-tool makers to keep the developer experience – as users – in mind.

So as a group, we broke this down further, looking at why some developers may be tempted to not think about the UX (whether those users are developers or not, per above), and instead adopt a “resume-driven development” approach (h/t Estelle again) that favors them showing off knowledge of sexy new frameworks vs. delivering a solid UX.

There are also work-culture pressures to deprioritize UX. “Ship fast or first or cheap, user-be-whatevered” can be a hard force to combat when it comes from management.

But, as others pointed out, developers can still make the choice to not be overly-reliant on tools or frameworks so they can choose the best route for the end-users. Individual engineers can ask forgiveness vs. permission in adopting a user-centric, front-loaded design approach from the start. Finally, to steal (again) from Estelle:

Taking the time to do it right the first time is “fast to code”. Refactoring is not. If you keep the six areas of concern — user experience, performance, accessibility, internationalization, privacy, and security — at top of mind while developing, your finished product will be usable, fast, accessible, internationalizable, private, and secure, without much effort.

Estelle Weyl

CATS

I love ForwardJS and the opportunity it gives me to explore something different and significant. This year I teamed up with my Mozilla colleague Chris Riley, interviewing him on the future of Internet policy (which he heads up for us).

An overview is here, Chris’ full recap here, and the video here — but here’s the spoiler: Competition, Algorithms, Tracking and Security.

Of course this is not meant to diss the Zig.

p.s. Some of my other ForwardJS talks: Creating a Strong Geek Culture, Being Effective in a Virtual World, and Building Trust Before Code: Unpacking the Weekly Retrospective.