The innovation continuum

My deep curiosity has led to what I like to call an “unlinear” career path. But at last I’ve been able to identify a common thread: helping technical people build cool stuff. And if I’ve learned anything, it’s that innovation isn’t restricted to startups, and that being a startup hardly guarantees you’ll do something innovative.

Also (per my previous post) I’m the weird outlier who gets energized when I get to public speak (see? I even say “get to”). Recently I had the privilege of sharing some of the ways I’ve helped technical people thrive.

First was the Lean Startup Conference in SF this October. I had told Mozilla’s cultural transformation story as part of our submission to Fast Company’s Best Workplaces for Innovators (spoiler: we got it!), and was keen to share it elsewhere, too. I pulled in my colleague Matt, who has led our Culture of Experiments program, to tell the story of how we evolved from a culture that avoided data at all costs out of respect for user privacy to one that embraces data in ways that deliver on that privacy more effectively (deck).

Then earlier this month I had the enormous pleasure to spend time with our team in Taipei to do a bunch of things. Admittedly, helping with the first-ever Firefox Run for Internet Health was a top contender for memorable stuff I did (really, it was); but relevant to this post is the fun time I spent with the team at TechStars’ Taipei Startup Week. It was a great excuse to reflect on my own career and convey all the different ways that startups need “other” companies, and vice versa (deck). In short, I didn’t want the audience (entrepreneurs) to sell their value short simply because they are trying to survive (easier said than done).

ok not gonna lie, helping with the race was super fun

It was also great to learn more from the folks there about the local markets; in my case, I hung out with the breakout group to get up to date on Korea’s startup market — certainly a revisiting of my roots from my first gig at Asia Pacific Ventures so long ago. Turns out their landscape is now not dissimilar from the U.S. in that a few central entities swallow up most of the smaller companies (in this case, the chaebol). But the funnel to acquisition is financed not by VCs and pension funds as in the States, but rather primarily through governmental entities. From what I could glean, this could democratize things a bit more at the earlier phases. But I’d love to dig in more.

Data By and For the People

I’m the rare human who loves public speaking. Yes, I get nervous, of course, but I also get a huge charge out of it. So this Slack message from my coworker Susy came with a special amount of serotonin:

I soon huddled with another colleague, Rosana (like Susy, far more familiar with crowdsourcing than I am). I got into my best Michael Krasny consultant-curiosity groove to beat back the imposter syndrome, and hopefully helped in crafting the panel topic: Crowdsourcing: Data By and For the People, to be hosted fittingly at Mozilla’s Community Space in SF.

Fortunately the conference organizer Epi loved our topic, and we enlisted Megan to be on the panel, along with Christian Cotichni of HeroX, Nathaniel Gates of Alegion, and Anurag Batra of Google.

Since the link to our panel doesn’t include what we wrote up to describe it, I’m pasting it here so you can get a sense of what it was really about:

Per CSW’s website, by “engaging a ‘crowd’ or group for a common goal — often innovation, problem solving, or efficiency,” crowdsourcing can “provide organizations with access to new ideas and solutions, deeper consumer engagement, opportunities for co-creation, optimization of tasks, and reduced costs.” 

But is this a fair value exchange for everyone involved? The above solves a number of problems for companies, but does it help contributors? And what role does crowdsourcing play in social equity?

As products and services increasingly incorporate Artificial Intelligence (AI), crowdsourcing has a critical role to play in ensuring new technologies and algorithms serve society equally. To quote The Verge: “Data is critical to building great AI — so much so, that researchers in the field compare it to coal during the Industrial Revolution. Those that have it will steam ahead. Those that don’t will be left in the dust. In the current AI boom, it’s obvious who has it: tech giants like Google, Facebook, and Baidu.” If we build the next generation of AI apps using data from a few select players, we risk creating a world that serves the needs of a few corporate entities vs. the needs of all.

If we crowdsource data to train the next generation of AIs, we stand in a much better position to deliver products and services that incorporate the needs of many vs. a few.

This panel will explore how different organizations are approaching crowdsourcing, and dive into the specific implications around rewarding contributors, and the social responsibility of organizations who use crowdsourcing. 

We organized a prep call which went great – we got into some of the thorny topics and surfaced some healthy panel-bait discomfort. But by far the most memorable part was at the end, when one of the panelists (we’ll let the reader guess) announced s/he had to “go to another part of campus” and “just wanted” to say that the published topic – the one that we had just prepped for, Crowdsourcing: Data By and For the People – really shouldn’t be about ethics at all, because nothing really “goes anywhere” from ethics discussions. Instead, we should delve into the “intricacies of crowdsourcing itself.”

Just before s/he dashed off to grab a campus bicycle, I reminded the call that the organizer loved the topic, and I was super grateful that another panelist chimed in to say the topic was precisely why s/he agreed to be on the panel.

I quickly developed a strong energy for the day of the show.

And it went fine; granted, we were one of just a few panels not in the main building (so: away from all the traffic), and at the tail end of the conference, 3:00pm on a Friday. So we were heartened by the ten or so folks who did show up and listened attentively.

We tackled the time this way:

  • How do you tie into crowdsourcing? 
  • How do you see contributors benefiting?
  • How about the economics?
  • How about ownership and meaningful influence?

And the takeaway? Our closing point was: if you get others’ data, use it only for the intended use case. And as Megan reminds us, “be sure the intended use case is clear; ‘consent’ doesn’t mean anything if people don’t understand what they’re opting into. And if it changes, that’s okay! Just let people know and require them to consent again.”

Personally I’m quite gratified we didn’t decide to unilaterally change the terms of service on our panel topic, either.

Corporate Surveillance

“So every purchase initiated or prompted by a recommendation you make raises your Conversion Rate. If your purchase or recommendation spurs fifty others to take the same action, then your CR is x50. There are Circlers with a conversion rate of x1,200. That means an average of 1,200 people buy whatever they buy. They’ve accumulated enough credibility that their followers trust their recommendations implicitly, and are deeply thankful for the surety in their shopping. Annie, of course, has one of the highest CRs in the Circle.

The Circle, p. 252

In May 2017 I invited Adrian Hon, entrepreneur, author, and futurist, to our speaker series at Mozilla. I’d read his book, A History of the Future in 100 Objects, after hearing him read from it at The Long Now, and I fell in love with the approach of giving retrospectives from an imagined (and well-informed) future vantage point. As we discussed what scenarios could be most relevant and meaningful for us at Mozilla, we landed on surveillance as a focus.

Specifically, Adrian presented from the perspective of someone in 2027 looking back at us then (in 2017), in awe of how readily everyone accepted ongoing, intimate surveillance in the home through devices from Amazon and Google, after balking so strongly at CCTVs in the 1990s.

It’s two years later and nothing freaky has happened with Siris or Alexas (yet). But we have gotten less trusting of surveillance in the home as we continue to learn more about the implications of our data; the idea of surveillance capitalism is moving from academia into the mainstream.

That is, we’re more aware and (legitimately) wary of how our personal data can easily be misused in consumer services, free and paid alike. But what about surveillance in the workplace?

This screenshot is not from The Circle. It’s an enterprise tool licensed by major companies to encourage their staff to post content about their company on a business-social network.

It is opt-in, and the marketing doesn’t suggest companies tie job performance to these dashboards (though…their client endorsers advocate using them to exert social pressure among teammates). And, in fairness, corporations have long been able to monitor at least some of their staff’s activity (though…not quite to the extent that today’s technology allows. At all).

As the past few years certainly remind us, just because you can, doesn’t mean you should. Guess we’ll see.

p.s. just came across this Quartz piece – from two years ago – lamenting the advent of corporate surveillance (and echoing my fears of Slack). la la la.