Data By and For the People

I’m one of those rare humans who loves public speaking. Yes, I get nervous, of course, but I also get a huge charge out of it. So this Slack message from my coworker Susy came with a special hit of serotonin:

I soon huddled with another colleague, Rosana (like Susy, Rosana is far more familiar with crowdsourcing than I am). I got into my best Michael Krasny consultant-curiosity groove to beat back the imposter syndrome and, I hope, helped craft the panel topic: Crowdsourcing: Data By and For the People, to be hosted, fittingly, at Mozilla’s Community Space in SF.

Fortunately, the conference organizer, Epi, loved our topic, and we enlisted Megan to be on the panel, along with Christian Cotichini of HeroX, Nathaniel Gates of Alegion, and Anurag Batra of Google.

Since the link to our panel doesn’t include what we wrote up to describe it, I’m pasting it here so you can get a sense of what it was really about:

Per CSW’s website, by “engaging a ‘crowd’ or group for a common goal — often innovation, problem solving, or efficiency,” crowdsourcing can “provide organizations with access to new ideas and solutions, deeper consumer engagement, opportunities for co-creation, optimization of tasks, and reduced costs.” 

But is this a fair value exchange for everyone involved? Crowdsourcing solves a number of problems for companies, but does it help contributors? And what role does crowdsourcing play in social equity?

As products and services increasingly incorporate Artificial Intelligence (AI), crowdsourcing has a critical role to play in ensuring new technologies and algorithms serve society equally. To quote The Verge: “Data is critical to building great AI — so much so, that researchers in the field compare it to coal during the Industrial Revolution. Those that have it will steam ahead. Those that don’t will be left in the dust. In the current AI boom, it’s obvious who has it: tech giants like Google, Facebook, and Baidu.” If we build the next generation of AI apps using data from a few select players, we risk creating a world that serves the needs of a few corporate entities vs. the needs of all.

If we crowdsource data to train the next generation of AIs, we stand in a much better position to deliver products and services that incorporate the needs of many vs. a few.

This panel will explore how different organizations are approaching crowdsourcing, and dive into the specific implications around rewarding contributors and the social responsibility of organizations that use crowdsourcing.

We organized a prep call, which went great – we got into some of the thorny topics and surfaced some healthy panel-bait discomfort. But by far the most memorable part came at the end, when one of the panelists (we’ll let the reader guess) announced s/he had to “go to another part of campus” and “just wanted” to say that the published topic – the one we’d just prepped for, Crowdsourcing: Data By and For the People – really shouldn’t be about ethics at all, because nothing really “goes anywhere” from ethics discussions. Instead, we should delve into the “intricacies of crowdsourcing itself.”

Just before s/he dashed off to grab a campus bicycle, I reminded the call that the organizer loved the topic, and I was super grateful that another panelist chimed in to say it was precisely why s/he agreed to be on the panel.

I quickly developed a strong energy for day-of-show.

And it went fine. Granted, we were one of just a few panels that weren’t in the main building – away from all the foot traffic – and we were slotted at the tail end of the conference, 3:00pm on a Friday. So we were heartened by the ten or so folks who did show up and listened attentively.

We structured the time this way:

  • How do you tie into crowdsourcing? 
  • How do you see contributors benefiting?
  • How about the economics?
  • How about ownership and meaningful influence?

And the takeaway? Our closing point was: if you get others’ data, use it only for the intended use case. And as Megan reminds us, “be sure the intended use case is clear; ‘consent’ doesn’t mean anything if people don’t understand what they’re opting into. And if it changes, that’s okay! Just let people know and require them to consent again.”

Personally, I’m quite gratified we didn’t decide to unilaterally change the terms of service on our panel topic, either.

Corporate Surveillance

“So every purchase initiated or prompted by a recommendation you make raises your Conversion Rate. If your purchase or recommendation spurs fifty others to take the same action, then your CR is x50. There are Circlers with a conversion rate of x1,200. That means an average of 1,200 people buy whatever they buy. They’ve accumulated enough credibility that their followers trust their recommendations implicitly, and are deeply thankful for the surety in their shopping. Annie, of course, has one of the highest CRs in the Circle.”

The Circle, p. 252

In May 2017 I invited Adrian Hon, entrepreneur, author, and futurist, to our speaker series at Mozilla. I’d read his book, A History of the Future in 100 Objects, after hearing him read from it at The Long Now, and I fell in love with the approach of giving retrospectives from an imagined (and well-informed) future vantage point. As we discussed what scenarios could be most relevant and meaningful for us at Mozilla, we settled on surveillance as the focus.

Specifically, Adrian presented from the perspective of someone in 2027 looking back at us in 2017, in awe of how readily everyone accepted ongoing, intimate surveillance in the home through devices from Amazon and Google, after balking so strongly at CCTVs in the 1990s.

It’s two years later and nothing freaky has happened with Siris or Alexas (yet). But we have gotten less trusting of surveillance in the home as we continue to learn more about the implications of our data; surveillance capitalism is moving from academia into the mainstream.

That is, we’re more aware of, and (legitimately) wary about, how easily our personal data can be misused in free and commercial consumer services. But what about surveillance in the workplace?

This screenshot is not from The Circle. It’s an enterprise tool licensed by major companies to encourage their staff to post content about their company on a business-social network. (UPDATE: it looks like that particular surveillance tool was somewhat shelved, and everyone I speak with at this business-social network sort of ducks when I ask about it. Perhaps the learnings are going into this effort – sort of like Facebook started with… Beacon.)

It is opt-in, and the marketing doesn’t suggest companies tie job performance to these dashboards (though… their client endorsers advocate using them to exert social pressure among teammates). And, in fairness, corporations have been able to monitor at least some of their staff’s activity for a long time (though… maybe not quite to the extent that technology offers today. At all).

As the past few years certainly remind us, just because you can, doesn’t mean you should. Guess we’ll see.

p.s. just came across this Quartz piece – from two years ago – lamenting the advent of corporate surveillance (and echoing my fears of Slack). la la la.

p.p.s. From October 2020: a piece about Amazon monitoring its workers.

p.p.p.s. From November 2020: Microsoft (which owns the company referenced in the main post) is here for us.

UX vs. DX

I was introduced to Estelle Weyl through my colleague Ali, who suggested Estelle as a speaker for Mozilla’s speaker series. I was intrigued by Estelle’s teaching on the differences between how we as humans perceive the speed and performance of our web browsers (vs. the precise, technical “reality”).

She was of course great, and her final slide also called out another important distinction:

So how fun was it when, a bit over a year later, she invited me to moderate a panel on, yep:

https://forwardjs.com/schedule

We had Tomomi Imura of Slack on board, and were super fortunate to recruit Sarah Federman (newly of Atlassian) and Jina Anne.

This group was so amazing, they agreed to meet on a holiday before the event to huddle. It was there that the subtitle emerged:

We realized we hadn’t intended it, and while we didn’t want to make the Lakoff mistake, we did think it was cool.

So, whiskey it was.

Oh, also: the conversation was as great as these women. Estelle and Tomomi had previously posted different ways to tackle this. Estelle defines DX as “the methodologies, processes, and tools (frameworks, libraries, node modules, pre- and post-processors, build tools, other third-party scripts, and APIs) by which developers develop web applications for their user base.”

And, because developers are often users too (think developer tools and, of course, frameworks), Tomomi approaches DX in a way that exhorts developer-tool makers to keep the developer experience – developers as users – in mind.

So as a group, we broke this down further, looking at why some developers may be tempted not to think about the UX (whether those users are developers or not, per above) and instead adopt a “resume-driven development” approach (h/t Estelle again) that favors showing off knowledge of sexy new frameworks over delivering a solid UX.

There are also work-culture pressures to deprioritize UX. “Ship fast or first or cheap, user-be-whatevered” can be a hard force to combat when it comes from management.

But, as others pointed out, developers can still choose not to be overly reliant on tools or frameworks, so they can pick the best route for end users. Individual engineers can ask forgiveness rather than permission in adopting a user-centric, front-loaded design approach from the start. Finally, to steal (again) from Estelle:

Taking the time to do it right the first time is “fast to code”. Refactoring is not. If you keep the six areas of concern — user experience, performance, accessibility, internationalization, privacy, and security — at top of mind while developing, your finished product will be usable, fast, accessible, internationalizable, private, and secure, without much effort.

Estelle Weyl