What we read this week (17 January)

Our favorite articles of this week. Have a great weekend.

Articles of the week

Week 152

A few thoughts on the current state of affairs in the debate about the NSA leaks by Ed Snowden, and the reluctance and inability of Germany’s government to do anything about the revelations.

Yesterday, Germany witnessed the debate between the two candidates to become the next Chancellor of Germany: Mrs. Merkel, the incumbent, against Mr. Steinbrück, the candidate of the SPD, Germany’s largest opposition party.

With that, we also witnessed the perceived state of the NSA / GCHQ / Snowden debate. Germany’s society made it painfully clear that it is not prepared to punish the ruling administration for failing to prevent the misdeeds committed by foreign – and probably domestic – intelligence agencies. As Sascha Lobo noted in one of his many columns for Spiegel Online: what else needs to be revealed to convince anybody that what has been happening should be punishable for all parties involved? Everything revealed so far should have been enough to bring people to the streets in masses, and should have been noticeable in the polls for the upcoming election on September 22nd. Instead, we get nothing.

Obviously people have different priorities, and I’m the first to say that being 30 makes me less anxious about topics such as pensions, while others are more imminently affected by them. I know that, and I’m trying to develop the sensitivity to accept it as a viable argument.

Yet what I find myself thinking, more often than not, is that financial security seems to trump any democratic principles and values. At this point, free-market mechanics have managed to make people believe that capitalism equals democracy. In an economically shaken Europe, Germans seem to be content with the fact that, at least in these uncertain times, they are not measurably paying with their wallets.

Another significant aspect of this topic is its sheer complexity. I’m tremendously grateful to people like Ed Snowden, Glenn Greenwald and Laura Poitras. They are prepared to risk everything to guide us into a world of unimaginable corruption. While they are busy doing that, we need a media that is prepared to guide us to the point where we are not only talking about what has been happening, but also speaking up – clearly and unambiguously – about what needs to be done to stop it. We see little, if anything, of this. Maybe that needs to change. Maybe the role of journalists in the 21st century needs to be articulating more clearly what needs to happen instead of documenting what has happened.

This is not something that should be taken lightly. It all starts with the “I don’t have anything to hide” argument, and some people might even think that in times of global unrest, the things the intelligence community is doing might actually be a good thing. I don’t. And not only because I was born in a state that oppressed its citizens and minorities for the sake of the greater good, while only a few really profited from it. It’s also not because I live in a part of Berlin which, only 25 years ago, was still under the iron fist of the same regime – one that had no regard for its citizens’ privacy or their right to self-expression.

“The road to hell is paved with good intentions,” somebody said a long, long while ago. While I’m not sure we are literally speaking about hell, one does not need to be a scholar of history to find many precedents in which excessive disregard for individual rights and privacy by the powerful few did not lead to the greater good. “In a democracy, the people get the government they deserve,” said Joseph de Maistre. We, as a society, might be getting what we deserve. The question is: is it really what we want for the future? If this is the state of things to come, what do we want to leave the next generation with?

I’m writing all this while sitting on a train back to Berlin. It’s a Monday after a 90-minute debate between two people aspiring to become the most powerful person in the nation. A debate with four moderators, in which we heard little more than a few questions on this topic, without a clear, articulable message of what will happen. We heard a chancellor who knowingly tried to reframe the discussion into being about something that it is clearly not, and we heard a Mr. Steinbrück who seems to hope that this topic will cost Mrs. Merkel more points in the election than it actually will.

On top of it all, there is an elderly woman standing across from me in the train. Standing, not because there are no seats, but because she doesn’t dare to sit. She is afraid that she will be killed by CIA agents; she’s rambling about various known and unknown conspiracy theories; she shouts about her experience with the Stasi and how the world is unraveling. While she is clearly ill, she unfortunately doesn’t sound crazy at this point.

It is up to all of us to do something. If we can go about our day, discuss new projects, discuss new opportunities, and debate the flatness of design in iOS 7, we can also debate and act and stand up to the people who feel that they are in charge now.

A good Monday to you all.

Week 140 + 141

While we are busy churning away at multiple projects, Igor takes a closer look at one aspect of how to deal with the revelations made by Edward Snowden’s leaks.

A busy two weeks. We are dealing with multiple concurrent client projects and planning new ones. When a project got postponed, we used to fall out of our productivity rhythm. Now, we just turn our heads to the next big to-do. We like how things are working out these days.

What didn’t help us stay on track were the revelations about the NSA. Those kinds of stories are right up our alley. They are broad, many people provide valuable perspectives, and we are very keen to understand it all. So here is our take on what is happening right now.

Revelations about the scale of the surveillance state have been making their rounds on every media channel. The leaks by Edward Snowden seem to have started something that should have been going on all along: real reporting on the issues. We’ve been seeing multiple articles emerge that look deeper into what seems to be an extraordinary setup by the US government to spy both on its own citizens and on the rest of the world.

We live, once again, in times in which the future we used to read about in science fiction books is not only upon us, but is even scarier.

But I want to focus on one specific aspect of this situation. Not many years ago, the most convincing argument for why companies like Google can’t mess with privacy too much was: it’s very easy to switch from one service to another, especially when it’s a search engine. As we can see today, this is not the case anymore. Many of the big players in the technology business have managed to integrate themselves into our lives in much the same way the financial institutions did. Technology is now also too big to fail. To switch away from Gmail, Dropbox and Facebook means severing oneself from an ecosystem that many other, smaller vendors use to make their products work. The APIs that we hailed as saviors of the open web, the ones that helped create those ecosystems in the first place, are now coming back to haunt us.

There are, of course, alternatives to all those services – open source alternatives, and ones that provide the user with far more security and privacy. And yet, convenience and the existence of those proprietary ecosystems make it very hard for people to make the switch. This is partly because the lock-in mechanisms have been designed to keep users in, but also because the open, more secure alternatives aren’t making their argument in the most effective way.

For a long time, I was a strong supporter of the open source movement: I used to run my machines on Linux and kept away from proprietary solutions as much as possible. That changed a couple of years ago. Not necessarily because my views changed – I just discovered that at this stage of my life, I value convenience and “just works” more.

The same applies to security. As an informed user, I can take quite a few precautions. I mostly go online through a VPN, I have a Tor browser installed, and I could reactivate my GPG key. But this is not a scalable solution. Even after the revelations about the extent of the NSA’s capabilities to tap into our data, we will not see mass adoption of those security measures.

We obviously cannot rely on our governments to protect us. We also can’t rely on the companies who host and own our data to prevent governments from accessing it as they see fit.

In a world in which we still need to fight arguments like “I don’t have anything to hide”, who will be able to provide both new questions and the ways to answer them that are adequate to the world that we live in?

What we read this week (15 Mar)

A web-based “brain” for robots, a disturbing culture revolving around hijacked webcams, the trickiness of making digital publishing sustainable for its workers, misgivings about Google Glass and a former Pixar employee’s storytelling tips.

Quote of the week

No work is ever wasted. If it’s not working, let go and move on – it’ll come back around to be useful later.

Emma Coats

Articles of the week

  • The Atlantic: A Day in the Life of a Digital Editor, 2013
    A long piece by Alexis Madrigal on the tricky state of digital publishing, in response to a similarly-titled post by Nate Thayer. Madrigal’s assessment: “So far, there isn’t a single model for our kind of magazine that appears to work.”
  • BBC News: Web-based ‘brain’ for robots goes live
    Rapyuta is a project that seeks to make robots smarter by freeing up some of their internal memory and giving them a central, online “brain,” or reference resource, to draw upon when they come across something new. There are many parallels here to the way we deal with unfamiliar situations these days – consulting YouTube, Wikipedia and Quora, for example.
  • Ars Technica: Meet the men who spy on women through their webcams
    A disturbing report on “ratters,” people who use RATs (Remote Administration Tools) to spy on their victims (“slaves”) by hijacking their webcams.
  • The Guardian: Google Glass: is it a threat to our privacy?
    Google Glass brings an element of uncertainty and distraction into human interactions, and raises even more questions than we already have about the boundaries of personal privacy. This article raises some interesting points as to how we could get around some of these problems, and in what situations society might object to this type of technology altogether.
  • Story Shots: 22 #storybasics I’ve picked up in my time at Pixar
    A list of tips that apply to much more than movie-making. (Many of them are in fact quite relevant for business consulting, among other things.)

What we read this week (25 Jan)

A couple of looks at Facebook’s Graph Search, the real problem with Google Now, why “functional stupidity” is important, what should worry us about the future, and why “obscurity” can be a more helpful term than “privacy” when it comes to data.

Quote of the week

The digital environment is not a parallel or purely virtual world, but is part of the daily experience of many people.

Pope Benedict XVI

Articles of the week

What we read this week (22 Jun)

The reads this week revolve around changing web culture (memes, the Slow Web and auto-generated e-books), and the morality and usefulness of collecting data on people (open city data and database marketing).

Quotes of the week

To be human is to tinker, to envision a better condition, and decide to work toward it by shaping the world around us.

Frank Chimero

Articles of the week

  • New York Times: You for Sale: Mapping, and Sharing, the Consumer Genome
    A chilling read about the company that has more data about people than any other company or institution out there. This article comes at a time when a similar German company called Schufa had to cancel its foray into finding out how to add Facebook data into its database after a huge public outcry. As Sam Seaborn said on West Wing: “The next two decades are going to be about privacy.”
  • Jack Cheng: The Slow Web
    Jack Cheng applies the principles of the slow food movement to the web and describes an approach that values timely over real-time, moderation over excess and knowledge over information.
  • Smithsonian: What Defines a Meme?
    Great excerpt from James Gleick’s The Information about the definition and the history of the meme. Essential reading for anyone involved in communications and the spread of ideas.
  • The Pop-Up City: Data-Driven Urban Citizenship
    This article lists many examples of projects that demonstrate developments, benefits and potential problems in the usage of urban data.
  • TRAUMAWIEN: Ghostwriters
    Artist coders set up bots that gathered YouTube comments and compiled them into mindless but fascinating e-books, which were then sold through Amazon. Amazon has since deleted the books, which raises further questions about what can and can’t be considered legitimate in online publishing. Excerpts from the books, including the brilliant Alot was been hard by Janetlw Bauie, can be read here.

What we read this week (9 Mar)

This week we read about the flaws in pattern-based investment, the many, many companies tracking us on the web, and how to fight busy with busy.

Quotes of the week

In a connected world, the seams are the story.

Kyle Cameron Studstill

People love people who can make things. Making’s the new thinking.

Russell Davies

Articles of the week

  • Chris Dixon: The problem with investing based on pattern recognition
    Chris Dixon points out the logical fallacy in investing in patterns that have been successful in the past. Though the article focuses on startups, the basic idea really applies in a broader sense: pattern recognition is just one of several steps in the scientific method.
  • Undercurrent: Self-Interest And Collaborative Systems
    “… people must then learn to use new technologies in less self-interested (and more responsible) ways,” writes Johanna Beyenbach, and we couldn’t agree more. A nice post by our output-heavy friends from Undercurrent on collaborative systems, with nice real-life examples.
  • The Atlantic: How Google — and 104 Other Companies — Are Tracking Me on the Web
    Alexis Madrigal used Mozilla’s Collusion tool for 36 hours to find out which companies were tracking his browsing. The result: 105 different companies gathered data on him. From this surprising discovery, he dives deep into ethical questions of who is allowed to do what and for what price. A must-read about the current online advertising market and the implications for our privacy.
  • Maree Conway: Understanding Foresight
    Maree Conway gives a great introduction into the practice of foresight and provides a helpful framework for how to approach it.
  • Rands in Repose: A Precious Hour
    Rands, aka engineering manager and author Michael Lopp, developed his own way of coping with the mind-boggling, somehow seductive “busy” of work and everyday life. In this article, he discusses the benefits of breaking “the flow of enticing small things to do” and blocking off one hour a day, every day, to “build something.”

What we read this week (10 Feb)

A week that prominently featured outcry over how web services handle our data, a Q&A with Foursquare founder Dennis Crowley and some thoughts on the Death of the Cyberflâneur.

Quotes of the week

The reason the Web took off is not because it was a magic idea, but because I persuaded everyone to use HTML and HTTP.

Tim Berners-Lee about the social process of trying to get everyone to use the same standards

As the room lit up with projections of Call of Duty footage, Nyan Cat animations and sample-heavy bass, I couldn’t stop thinking that this show was among the signs that “Internet culture” is now just culture.

Anthony Volodkin about a recent Skrillex show

  • RWW: Foursquare CEO Dennis Crowley on What He’s Learning From Twitter and What’s Next
    Dennis Crowley speaks about the future of Foursquare and how his service will become more mainstream.
  • The Next Web: Path’s Address Book Mistake Shows an Apple Problem
    Path has been caught uploading its users’ entire address books to its servers as soon as one installs the app on an iOS device. The outcry was and is big – rightfully so. Still, there is a larger issue to be discussed here: we have started accepting the framing of privacy-related questions the way Facebook, Apple and Google want us to see them. Time to demand that the services we use be transparent upfront about how they intend to use our data.
  • Pinterest is quietly generating revenue by modifying user submitted pins
    Pinterest, while officially still in beta, is seeing tremendous growth. Now they have even started earning money, which is not objectionable. The way they chose to start monetizing is quite questionable, though: they alter user-submitted links to include referral codes, thus collecting affiliate kickbacks – without making that obvious to their users.
  • NYTimes: The Death of the Cyberflâneur
    We share Evgeny Morozov’s opinion on Facebook’s “frictionless sharing.” But services like Tumblr and Twitter make us think that the cyberflâneur is alive and well.
  • Slate: How the hot ad agency fell from grace.
    “I come to bury Crispin, not to praise it.” Slate’s Seth Stevenson, never a fan of ad-world darling Crispin Porter + Bogusky, rips them apart for their work on VW and Burger King, both of which dropped CPB in 2010 and 2011.

What we read this week (2 Dec)

Russell Davies & the post-digital, how Gidsy could change things, why we are surprised about our lack of surprise about the future, why a prototype is worth a thousand ideas, and a retrospective on Zuck’s apologies are just a few of the things we read this week & highly recommend.

Quotes of the week

The best advice I could possibly give you, and forgive me if this seems glib, is to work. Work. Work. Work. Every day. At the same time every day. For as long as you can take it every day, work, work, work. Understand? Talent is for shit.

Barry Moser

A digital strategy is a plan to engage and empower networks of people, connected by shared interests, to satisfy a measurable business objective.

Bud Caddell

Articles of the week

Quantified Self Data & Privacy

Tracking our behaviors and our body data means tracking the most sensitive kind of data. This is the very thing the privacy debate is about: health and location data. So let’s think about this.

This article is part of our series on the Quantified Self.

Quantifying ourselves means tracking the most sensitive kind of data: Our behavior and our location.

The conversations we have on a day-to-day basis about body tracking and the Quantified Self clearly show that most people are acutely aware of just how sensitive this type of data is. In fact, privacy implications tend to be one of the first issues to come up.

And this most certainly isn’t just an exaggerated reaction, but rather the sensible thing to think about. But let’s take it step by step.

Our most sensitive data

In theory, tracking and “optimizing” ourselves leads to better life decisions – in other words, it helps us be more ourselves. However, disclosing exact data about our bodies and our whereabouts makes us vulnerable. The potential for abuse is immense.

Now there are several aspects to look at:

  • What kind of data do we capture? This is largely determined by the types of services and devices in use.
  • Where is the data captured? In most cases these days our data sets are stored in the cloud, not locally. This makes them easier to handle and back up, but also more hackable and commercially exploitable. In most cases, the cloud is the right place to put this data, but I’d imagine there’s a business case to be made for allowing users to store data locally. Some people might even pay a premium.
  • How do we share our data? The spectrum ranges from publishing our data sets in full, publicly and non-anonymously (this is roughly where Foursquare is), to highly anonymous aggregated data (medical data). More on that later.

And of course: Who is interested in our data?

Who wants our data?

There are quite a few players out there for whom our data is highly valuable – often in a straight-forward financial way.

Marketing departments are an obvious candidate in this context, as behavioral data creates opportunities to target potential customers and build relationships. This could be used in “white hat” marketing, i.e. in non-critical ways that actually create value for consumers. It could also be done “black hat”, i.e. in abusive ways. Think data mining gone awry.

Researchers of all flavors are interested in the kinds of data sets created through self-tracking.

Governments might be tempted by location and mobility data, and might try to match social graphs, location overlaps and group behaviors.

Actual is not normal (a tribute to Edward Tufte). Image by Kevin Dooley, licensed under Creative Commons Attribution.

Insurance companies might see behavioral data as a gold mine. Depending on your country’s legal and social security framework, health insurers might charge customers differently depending on their fitness regime, smoking behavior, the regularity of their heartbeats, the number of drinks per week, or even the number of bars visited – maybe the types of meals eaten and calories consumed, or body weight. In this particular context the possibilities for use and abuse are endless.

Which brings us to…

Trust

Where we deal with sensitive data, trust is key. We, the consumers & users of web services, have collectively suffered privacy missteps by internet companies and over-zealous startups over and over again. (I’m looking at you, Facebook!)

While many of us have gotten used to sharing some aspects of our social graphs online, behavior data might be a different beast altogether.

This isn’t just a matter of degree, either. Here we have such clear abuse scenarios that insisting on control over our data simply becomes common sense.

As researcher danah boyd states (highlights are mine):

“People should – and do – care deeply about privacy. But privacy is not simply the control of information. Rather, privacy is the ability to assert control over a social situation. This requires that people have agency in their environment and that they are able to understand any given social situation so as to adjust how they present themselves and determine what information they share. […Privacy is] protected when people are able to fully understand the social environment in which they are operating and have the protections necessary to maintain agency.”

Take Facebook, for example. Recently, the company introduced what it calls “frictionless sharing”. What that means is that all kinds of apps and services share your activity on Facebook – which song you’re listening to, which articles you’re reading, what you comment on, etc. While the announcement drew quite a bit of criticism, we can only assume that the increased sharing activity will serve the company’s goals well: it will create more engagement data, at the cost of privacy and control. In other words, at the cost of agency.

What this means for companies operating in this field is this: it must be absolutely clear that they never, ever share your behavioral data with anyone without your explicit consent. More bluntly: if you don’t actively share your data with anyone outside the company, they must not do it either.

Here I’d even go so far as to suggest thinking about worst-case scenarios: maybe it makes sense for some companies not to store your data at all, but instead to save it on the client side, so that they could not even be subpoenaed into giving up user data.

Do I even have to mention that a strict and very easy to understand privacy policy is a must?

So, now that we have reduced the potential for abuse a bit, the next question is…

Who owns our body data?

Now here’s a question that’s both very simple and incredibly complex. As a guideline, the ideal we should always strive for is: We do! Nobody but ourselves.

However, it’s of course a bit more tricky. The service provider will need some of the data for their business case. Expect not to get anything for free. As the old internet proverb goes, you either pay or you’re being sold.

So we have data ownership and usage rights on one hand, and then we have data portability.

Let’s say we upload our running data into Runkeeper, track our meals with The Eatery, our sleep patterns with the FitBit or the Jawbone Up, and our social life through Foursquare. That’s already quite an array of services for even a basic tracking setup.

If history has taught us anything, it is that web services don’t live forever. So we need to be able to get our data back when we need it. Better still, we should be able to move our data sets from one service to another, combine and mash them up, and allow different services to access our data in ways we can easily control. Easy is key here as we move towards mainstream adoption.

Connection Problem. Image by Peter Bihr.

A simple data dump won’t necessarily do – the data has to be structured, maybe even standardized. Only then can we use it in new, interesting ways.
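To make the point concrete: a structured export could be as simple as a self-describing JSON document with explicit units and timestamps, so that another service can read it back without guessing. A minimal sketch – the field names and schema below are purely hypothetical, not any real service’s format:

```python
import json

# A hypothetical, self-describing export format for self-tracking data.
# Schema, field names and units are illustrative assumptions only.
export = {
    "schema_version": "1.0",
    "records": [
        {"date": "2012-05-01", "type": "run", "distance_km": 5.2, "duration_min": 31},
        {"date": "2012-05-01", "type": "sleep", "duration_min": 412, "awakenings": 2},
        {"date": "2012-05-02", "type": "meal", "label": "lunch", "calories_kcal": 640},
    ],
}

# Serialize for portability...
dump = json.dumps(export, indent=2)

# ...and any other service can parse it and work with the data directly,
# because the structure and units are explicit rather than implied.
data = json.loads(dump)
runs = [r for r in data["records"] if r["type"] == "run"]
print(f"{len(runs)} run(s), {sum(r['distance_km'] for r in runs)} km total")
```

The design choice that matters here isn’t JSON itself – it’s that every record carries its type, date and units, so data sets from different services can be combined without a proprietary importer.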

Sharing ourselves

A hypothesis: Collecting behavior data is good. Sharing behavior data is better.

Bigger data sets allow us to derive more meaning, potentially even to create more data from what we have. The kind of data we talk about becomes immensely interesting once we start thinking in terms of scalability. Think two people comparing their fitness data is cool? A billion people comparing fitness data is cool!

To protect the individual, aggregated and anonymized data is the way to go here. Aggregated data sets still allow for interesting correlations while providing some level of protection. Although even aggregated data sets can be tricky: a study found that 87 percent of people in the United States were uniquely identifiable with just three pieces of information: gender, date of birth and ZIP code.
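The mechanics behind that 87 percent figure are easy to reproduce: just count how many records share each combination of such quasi-identifiers, and see how many combinations occur only once. A toy sketch with made-up records:

```python
from collections import Counter

# Toy data: even rows with names stripped can be unique on the
# combination of quasi-identifiers (gender, date of birth, ZIP code).
records = [
    {"gender": "f", "dob": "1980-03-14", "zip": "10115"},
    {"gender": "m", "dob": "1975-11-02", "zip": "10115"},
    {"gender": "f", "dob": "1980-03-14", "zip": "10117"},
    {"gender": "m", "dob": "1975-11-02", "zip": "10115"},  # shares its combination
]

# Count occurrences of each (gender, dob, zip) combination.
combos = Counter((r["gender"], r["dob"], r["zip"]) for r in records)

# A record whose combination occurs exactly once is re-identifiable:
# anyone who knows those three facts about you can pick out your row.
unique = sum(1 for r in records if combos[(r["gender"], r["dob"], r["zip"])] == 1)
print(f"{unique} of {len(records)} records are uniquely identifiable")
```

Run on a real population, this is the same calculation the study performed – and it is why “anonymized” releases usually need to coarsen these fields (year of birth instead of date, region instead of ZIP) before sharing.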

Sharing isn’t a simple process, either. As Christopher Poole pointed out in a fantastic talk, most current models of online sharing assume that you have one identity and that users just need to be able to determine which bits of information to share. In reality, though, it’s much more complex. Our identity online should be like it is in the physical world – multi-faceted, context-dependent. It is not, in Christopher’s words, who we share to, but who we share as. This is hard to put into code, but it’s important that we think about it, and hard.

Context is key in sharing. I might not be willing to publicly share my brain activity, heart rate and genetic information. However, I might be very willing to share parts of these with my doctor while in treatment – as long as I can be sure that the doc won’t pass them on to my insurer, who would then charge me extra for a higher-than-average genetic risk of certain kinds of cancer or Alzheimer’s. Today, most doctors and even larger clinics aren’t able to make use of the type of genetic snapshot that commercial services like 23andme provide, although this might change over time.

A duty to share?

In a radio interview recently we discussed privacy implications of the Quantified Self in general, and of DNA analysis in particular. What is safe to share, what is reasonable to share?

I’d like to flip the question around: What is ok not to share? Maybe we even have a duty to share?

Think of the medical research that could be done, and the treatments that could be found, if more of our behavioral data was openly available. If even just one major disease could be treated more effectively by discoveries made through body tracking and our shared data, would that not be worth it?

It’s a question we can’t answer, but I urge you to think about it. Maybe it’ll make you want to track and share some more.

Until then we encourage you all – both users and producers of Quantified Self services – to pay privacy implications the attention they deserve. So that at some point we can stop worrying and start building stuff that helps us be more ourselves.