Bits

Mobile is not a neutral platform (yet) and that is a great thing for startups

As I first read this superb piece on how mobile is not a neutral platform, my conclusion was that it should have been titled "mobile is not a fully baked platform," in that the central argument is: unlike the web of yesteryear, in the mobile Internet of today, the OS vendors are still fighting a battle of differentiation, adding bits and pieces to their respective platforms that change fundamental aspects of both the user experience and how users discover and engage with new apps.

This was also true of the web, of course: MS IE had all sorts of "innovative" ways to help users navigate, discover, and engage with publishers (anyone remember SmartTags?), and Netscape gave chase in some cases and tried to chart its own course in others. The key difference is that the web's platform differentiation phase settled down with the collapse of the dot-com boom, so depending on how you measured it, it was over 4-5 years after it had begun (thus paving the way for Google to grab control of distribution through better discovery). As Benedict points out, we are nowhere near this point in smartphones and quite unlikely to get there, whether we see an economic correction soon or not. There is simply far too much money in owning this much more personal platform for Apple, Google, the Android OEMs, and even Microsoft to stop trying.

But how does this affect the typical startup trying to make its way in this world? The real question coming out of his analysis for me is: is it better to be a startup with a service or a product in this world of 2015, with the OS vendors constantly shifting the landscape under your feet as you try to gain purchase on the world, or was the post-2000 period of the stable web/browser better? (Putting aside, of course, the 2.5 billion extra people that are addressable, which would otherwise overwhelm the calculus in favor of the present.)

Short term, what the mobile OSes are doing is breaking the traditional tradeoff between richness and reach of the user experience with each innovation: whether it is biometrics, 3D Touch, AI, peripheral wearables, or any further innovations we can't yet envision, entrepreneurs can now deliver much richer experiences without having to worry about how to get distribution in the face of platform vagaries (Windows X.X + Pentium of at least X MHz + broadband of at least X Mbps was all too common a mess for the user to parse pre-web-browser ubiquity). So, win for creative entrepreneurs, at least as long as these new innovations are quickly followed with a third-party API.

In the mid term it is hard to see how 2-4 companies attempting their own things (with the volume leader dependent on an ecosystem of hardware makers desperate not to be relegated to just that) won't create some of those same DLL-hell dynamics that afflicted the old richness/reach tradeoff, though Google seems intent on abating its unintentional contribution to that messy future. Still, with even just two targets across a dozen relevant geographies, we're going to go through a period where large development teams will have to maintain several codebases with relatively different user experiences in order to get the reach. So, win for those with capital and resources and a relative loss for the tiny guys, modulo the fact that the audience on each platform is large enough to start by targeting just it.

And long term, of course, we're likely to see this platform level out the way the browser leveled out the web in 2000. It's just too unlikely that a common runtime won't win in the end, once the features that users care about in this wave are settled upon. However, it may not be all roses for startups then either. As Benedict points out, if the platforms have by then fully subsumed app discovery and user acquisition, we may end up looking at a world that looks more like the cable industry, or telecom pre-smartphone, except with a handful of new names at the top.

I personally am thus very excited about the prospects for the short term and startups. After all, the more the landscape shifts the more the furry mammals can scurry about in the hopes of finding higher ground.

Garbage trucks and the death of EMC

No doubt this week's acquisition of MA's largest tech company, EMC, by a financially owned and operated banana handler from the last century (Dell) was a bad day for the Boston technology ecosystem. Not only should we expect to lose a whole bunch of jobs, but over time there is little doubt in my mind that "EMC east" will become yet another satellite office of a company whose main decision makers live and work elsewhere. And on top of it, this is one less acquirer in an ecosystem that is full of infrastructure startups, most of which would have hoped to have EMC as a bidder at some point, even if only to create a real auction.

The deal rings of another of those mega mergers: HP and Compaq in 2002. Back then, Sun's outspoken CEO, Scott McNealy, described it as something like two garbage trucks colliding in slow motion, which was both funny and sad (and then a little ironic when, a handful of years later, his company had its own two-truck moment with Oracle).

I am not sure whether this new merger will work any better than the aforementioned technology "big deals" have. What is most remarkable about the whole thing, though, is how the pace of collapse among the big tech companies seems to be accelerating. Just this year HP, sucking wind and up against the ropes, decided to split itself up in a desperate attempt at being more nimble (having worked there, I am highly doubtful it is going to work). And all of this amidst the backdrop of Amazon— a retailer of all things— announcing a phenomenal rate of growth in their cloud business.

It's sort of cliche to declare that the rate of change is accelerating in the technology industry, but nowhere does it seem more obvious than in the creative destruction that the combination of the smartphone and the cloud seems to be wreaking on all of these previously unassailable tech giants.

The conversational interface in its ghetto

As a relatively late early adopter of the Amazon Echo, I have to admit that the product really does feel as per Arthur C. Clarke's line ("Any sufficiently advanced technology is indistinguishable from magic"). Much more than Nest or WeMo or August Lock or even Sonos, watching the overgrown Bluetooth speaker point its Cylon-like light at wherever you happen to be talking from, immediately reach back into the cloud to recognize what you are saying, and then serve the answer out of one of a few deep repositories (some of them transactional) makes it feel like the interface of the home of the future, today. And what is most amazing is that as a user, you don't miss the lack of a screen at all.

At $180 it is too expensive, but as with the Kindle hardware, there is no doubt the price will come down over time (having already come down 10% in 10 months at what must be minuscule volumes). More to the point, most of the smarts live in the cloud and benefit from cloud economics, so the overall cost of delivering an ever-improving service will also come down.

Here is the catch (and the reason why I held off for so long): with the Amazon Echo, you've got the future in your kitchen, but that expansive, mind-bending future is ghettoized to whatever services Amazon provides and/or is willing to partner with. Thus, no Spotify. No Apple iTunes and no Sonos. If you want to buy something, it is Amazon or nothing, and there is zero Google in any of the services (who does traffic today without going to Waze?!?). This is a terrible future— one where every service is a silo that requires its own hardware for access and provides a limited view of the world— just ask anyone who has played the name-your-set-top-box hide-and-seek game of searching 4-5 services for a particular thing to watch. Except that in that case you do have a screen for feedback— in the case of the Echo, it's just a disembodied 2001 voice telling you that it either didn't understand your request or that it simply isn't available.

The glimmer of hope, and what convinced me to give the Echo a shot, is the API (which is cleverly thought out). Let's just hope Amazon tries the whole "partner broadly and listen to the users" strategy rather than letting this tiny and awesome bit of future become the victim of their strategy tax.
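
For the curious, here is a minimal sketch of what a third-party skill endpoint can look like, assuming the JSON request/response shape the Alexa Skills Kit documents: Amazon POSTs the recognized intent to your server and you answer with text for the Echo to speak. The Flask route and the "GetBeerOnTap" intent are invented for illustration.

```python
# A minimal sketch of an Echo skill endpoint. The route and intent name are
# hypothetical; the JSON shapes follow the Alexa Skills Kit request/response
# format as documented at the time.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/alexa", methods=["POST"])
def alexa():
    req = request.get_json()["request"]
    if req["type"] == "IntentRequest" and req["intent"]["name"] == "GetBeerOnTap":
        speech = "Today we have two IPAs and a stout on tap."
    else:
        speech = "Sorry, I didn't understand that request."
    return jsonify({
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    })

if __name__ == "__main__":
    app.run(port=5000)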

Whiffing it with the Apple Watch API (even 2.0)

In looking at this infographic on the advancing power of various iPhones over the years, I was struck by something completely orthogonal: apparently the Apple Watch is as powerful as the iPhone 4, one of my very favorite phones in the whole line (the first Retina display!). In light of how disappointing the apps that were ready at this week's launch of watchOS 2.0 were, it feels like there is a lot of CPU being wasted.

The more I use my watch, though, the more it strikes me that the "app model" is entirely the wrong approach to a wearable on your wrist. The only app I want to be "native" is a podcast/media player, and that is for the purposes of running without an open-face sandwich strapped to my arm. Otherwise I suspect the complications (the equivalent of desktop widgets) are more than enough of an API for third parties looking to hook into my attention.

Because at the end of the day that is what the current Watch does best: make notifications less intrusive than they would be if one felt compelled to take the phone out to handle them. I don't buy that they can be more intrusive, given how easy the Watch is to put into DND (do not disturb) mode.

So rather than waste the CPU on aping the phone app store model, what could be done with original thinking on the API? The world's best wearable— always collecting data and doing a bare minimum amount of machine learning to understand your activity, your health, and your intent over the next couple of hours— all with no cloud and no phone. This seems like a much more worthwhile task than pretending to be a small smartphone on your wrist— which incidentally is why I just don't get all of the people who "can't wait" for untethered watches.

It would take real leadership to admit that the native app focus was wrong and to completely retool the API, but if the Watch is going to be its own thing, this is what it is going to take. Let's see if Apple is up to the task.

In 2015 great products can no longer be sold without decent web services included

The new iPhones are out and they are as good as the early reviews indicate. Or I have to hope they are, because I am sitting here wading through hours of CPR and other life-saving measures applied to iTunes and my local iPhone backups before I can try the new phone.

Why?

Because Apple is so bad at web services that the only way to transfer all of the activity/health data from the old phone to the new phone is to go through iTunes, and more specifically through a backup process that was designed 8 years ago and updated with patchy hacky shit along the way. It was a great blast from the past to open the app after four years and have a back-in-time moment to what I was listening to and watching in 2011, but at this point, and as part of the most premium smartphone out there, this is just embarrassing.

And before anyone thinks it: this is not about the privacy and security of your health data (that can easily be solved with encryption). It's about lazy product management and shipping stuff too quickly. And though iCloud backups are an option, the backup/restore process there is so slow that I'm convinced no one really uses them for anything other than believing their stuff is protected.
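
To make the encryption point concrete, here is a minimal sketch in Python with the cryptography package (an illustration of the principle, not anything Apple actually ships): derive a key from the user's passcode, and the health blob can live on any untrusted server without being readable.

```python
# A minimal sketch of encrypting a health-data blob with a key derived from
# the user's passcode, using the Python "cryptography" package. Purely
# illustrative; not how Apple's backups actually work.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

salt = os.urandom(16)  # stored alongside the ciphertext; not a secret
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                 iterations=200_000)
key = base64.urlsafe_b64encode(kdf.derive(b"user-passcode"))

vault = Fernet(key)
blob = vault.encrypt(b'{"steps": 9214, "resting_hr": 58}')  # safe to upload
assert vault.decrypt(blob) == b'{"steps": 9214, "resting_hr": 58}'
```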

In previous years, the lack of web services at the core of the product experience was okay for two reasons: most people would switch whatever they could to independent services (Spotify for music or Dropbox for file sync), and many more things were presumed to need the PC for setup. But it is 2015, and I expect this is going to come back to bite Apple sooner rather than later if they can't fix it. Maybe it's time to spend some of that hard-earned cash on the balance sheet acquiring some of this DNA rather than on moonshots that are going to reinvent the automobile industry.

History doesn't repeat but it sure does rhyme

Apparently most people rely on three native apps on their smartphone. I guess that means mobile is over and we can all go figure out whether it is VR, AR, or sensors plugged deep into the human cortex that will represent the next platform, right?

The long view (not to be confused with a short distance) of the smartphone has always indicated that the path forward is the web, with its increasingly capable runtime and "applications" that are loaded from a URL and implemented in Javascript. However, over the last eight years this bet would have left you on the short end of the stick— UI performance, network bandwidth, native device capabilities, and discovery for normals (a.k.a. app stores) all conspiring to keep that long view a very long distance.

But there might just be cracks forming in the armor around the native app experience. In a history-doesn't-repeat-but-simply-rhymes moment, the volume player in the US is currently under investigation by the FTC for tying things together, not the least of which are native app versions of search, maps, and the app store. Meanwhile ad blockers in the native browser, generally seen as threatening to the future of the web on mobile, may have just cast a bright light on the needs/incentives of publishers looking to reach users on mobile in a way that leaves some room for a possible business model. And as a backdrop, we've got a smartphone being released today that rivals the laptop performance of just 1-2 years ago, ushering in the opportunity yet again to waste MIPS in the name of developer productivity.

Wrap that all in a bow of discontent over the winner-take-all dynamics of both of the major app stores today and you might just get to enough of a disgruntled state to have the web do what it did on the desktop: provide a new common runtime that lets developers and service publishers feel like the game is far from over when it comes to the open web on mobile.

Oculus Connect and its developer community

Being at Oculus Connect 2 (their annual developer conference) was super exciting, not just because of all of the announcements (and there were plenty of them) but because it is really amazing to see how a developer community comes together from the very beginning and matures over time. 1,500 people showed up in Hollywood for this two-day show, and they came from all over the world, almost all very excited by the prospects for VR and what they will be able to do with it in their respective spheres.

Given the number of startups that desperately want to build developer communities, some non-obvious observations from what I've learned having been a part of this journey (though peripherally as an investor) since the beginning:

  • it's not about the money: too many people want to make it a simple game of economics but the reality is that the first 1,500 people will come to your party in order to be a part of the future. Atman's keynote is worth watching here— he had a very sweet description of this when he told a story about an old boss telling him "These are the good old days."
  • instability will be tolerated: there were jokes about the dark days of the Oculus SDK on PCs going through display hell, and people actually laughed. It's a community on the bleeding edge of the future, after all, and it very much feels like instability is much preferred to poor communication and a lack of transparency.
  • it can feel intimate for a while: at 1,500 people and with some big-name publishers, one would think it would start to feel like the typical scaled-up tech conference, but OC2 still feels like a place where anyone can ask anything of anyone else and not fear the RTFM-type responses which can become common in larger communities.

What does a programming language with native support for deep learning look like?

The frenzy around deep learning (multi-level neural networks) has reached a new level during this trip to Silicon Valley. It has gone beyond "automatically want to fund" to "it is in the water we drink and the air we breathe out here," which makes me wonder when the investment backlash for all of these "2% better on test X" results is going to come.

I am convinced of two things though:

  1. If you are a startup, your dataset matters more than just about anything you will get from a deep neural network. I'd rather be Uber (or Etsy or Evernote or Spotify) and have billions of data points and the need to find even more data scientists and algorithms than be a team of data scientists looking for both a problem and a dataset to build a business around.
  2. Interestingly enough, I do think that deep learning is going to be in the water we drink and the air we breathe, at least when it comes to being a working programmer in the domain of an increasingly connected and complex world.

A very smart friend today provided me with a new lens to look at these quirky multi-layered neural networks through: he claimed that they are the Great Unification of Statistical Computing in that they should, over time, subsume Classic Machine Learning by being more generally applicable to a wide class of classification and prediction problems.

If this is true, then I wonder how long it will be before we get a programming language that supports deep learning methods natively. I'm not exactly sure this won't end up being a really well-thought-out library or framework for your favorite data-crunching language, but to continue the thought experiment for a moment: what if, along with lists, hashes, strings, and number types, we got a DNN data type that could somehow easily encapsulate the setup of a deep neural network, along with a series of methods to perform the first 80% of the steps necessary to train it? Tweaks could be made with extensions to the basic training methods, and classification could then be done by calling still more methods.

To take it one step further, such a programming language might completely abstract where any particular step runs (server, client, GPU, etc.) and make sure that the models, when big enough, can get moved between computers completely transparently.
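
Today the closest analogue is a well-designed library rather than a language feature. A minimal sketch with scikit-learn's MLPClassifier (on made-up data) shows how much of the thought experiment a library can already approximate— declare the network, call fit, call predict— and how far that still is from a true built-in type:

```python
# A minimal sketch of the "DNN as a near-built-in type" idea using
# scikit-learn's MLPClassifier. The data here is invented; in practice
# X would be your feature matrix and y your labels.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))           # 1,000 examples, 20 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a toy label to learn

# Declaring the "type": two hidden layers, everything else defaulted,
# which is roughly the "first 80% of necessary steps" handled for you.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
net.fit(X, y)               # training is one method call
print(net.predict(X[:5]))   # classification is another
```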

I need to dig into the problem domain to see if any of what I just wrote remotely makes sense. In a sense I wrote it here at the beginning of that journey to serve as a record of how I might want this to work before pesky implementation details get in my way. Because I sure would love to have deep learning be this simple for me.

Interface innovation is now a Big Boys' Game

Contrary to my initial opinion that 3D Touch was the right button on the early Windows mice, the reviewers all seem to see it as a big step forward (but be warned: the utility functions of reviewers and normals rarely meet). Enough of them have made the claim now that I am beginning to wonder if I should reserve judgment until I actually get my own iPhone 6S+.

The sad thing in thinking this through when it comes to startups is that this level of innovation in integration is so far removed from what the typical startup can do. I don't mean build the eighth generation of a super complex device— that is clearly impossible— but rather have the resources to bring new interface modalities into existence like 3D Touch.

This was quite different on the playing field of PC software. There, new entrants could use what eventually became the native GUI toolkits, but they were also free to invent their own things and use those instead. And in fact, Bob Frankston tells a great story about the crazy hacks they employed in the early days of the Apple II to build up the UI of VisiCalc to be performant enough.

I guess when all of the interaction is taking place on a block of glass, those days are over— at least for the little guys.

Long live bots, down with all of these bots!

In a harbinger of what is coming, Twitter has a problem with bots. Technical, business, and social reasons are keeping the service from stamping them out, and meanwhile people are being fooled by programs meant to ape the narrowest of human behavior online. In fact, as the default mobile native interaction model shifts to a messaging-like interface, it's quite likely that bots will mediate more and more of our interactions online.

There will be many that try to serve useful purposes, like the AI-powered personal assistants at X.ai. And for a while, these bots will serve their owners, sometimes taxing the poor humans on the other side who are expecting something a standard deviation to the right of the Turing line when it comes to intelligence. But then these suckered humans will get their own bots as well— after all, who wants to waste time talking to a computer? And eventually it will be just bots talking to bots, negotiating on behalf of humans in seemingly simple but ultimately complex and fractal exchanges.

Need a job? Use your bot to apply (it will talk to a job posting bot). Need a quote from a contractor for your business? Let your bot talk to the supplier's bot, slowly disclosing details and willingness to pay for/provide the service over a series of interactions. And while this will seem like an artificial waste of resources at first, it is likely to be the ever-decreasing percentage of humans in the middle, sometimes fooled, sometimes knowingly participating, that will keep the system going under the pretense that some human somewhere is actually participating instead of simply configuring a bot and setting it free. And even that may eventually fall to a bot, which will do its own bot spawning across the Internet towards its own ends.
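
A toy sketch of what the simplest such exchange might look like— a buyer bot and a supplier bot conceding toward each other until their offers cross (all of the numbers and the concession rule are invented for illustration):

```python
# A toy model of bot-to-bot price negotiation: each side concedes a little
# per round until bid and ask cross. Real agents would negotiate over many
# more dimensions (scope, timing, trust) than a single price.

def negotiate(buyer_max, seller_min, rounds=20):
    bid, ask = buyer_max * 0.5, seller_min * 1.5  # aggressive opening offers
    for _ in range(rounds):
        if bid >= ask:                     # offers crossed: strike a deal
            return round((bid + ask) / 2, 2)
        bid = min(buyer_max, bid * 1.05)   # buyer concedes upward
        ask = max(seller_min, ask * 0.95)  # seller concedes downward
    return None                            # no overlap: the bots walk away

print(negotiate(buyer_max=1000, seller_min=800))  # settles between 800 and 1000
```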

Bots FTW! (Just make sure you are ahead of the pack enlisting your own).

Decentralization, the web, and programmability

This piece is fantastic. It takes the topic du jour (ad blockers) and weaves a great argument about how ad blockers are an example of a very powerful pattern: giving power back to the edge of the network (the user, in this case) as opposed to the big companies running the increasingly centralized services that we depend on. It also covers Robert Reich's somewhat alarmist editorial today on how the big tech companies (Google, Apple, Facebook, and Amazon) have aggregated too much power and should thus be regulated.

These businesses are all examples of network business models, which of course imply centralization of power as the business ascends, so I'm skeptical that regulation outside of the AT&T negotiated-monopoly approach would do anything to significantly affect their power (and that was done when the company was being set up, not retroactively)— at least anywhere near as much as obsolescence will in due course. But Albert's piece makes a much more subtle (and powerful) point: in talking through the power of the user-agent as a way to filter content, he is really referring to the web as a programmable medium, a point he hammers home when referring to the limits of mobile native by comparison:

The reason users don’t wield the same power on mobile is that native apps relegate us endusers once again to interacting with services just using our eyes, ears, brain and fingers.

So true. I remember the lightning moment for me with the web was in 1998, when I realized that with a Perl script and passing knowledge of the Alta Vista keyword parameters, I could get structured data about the links pointing to any URL in the index of a program that didn't even know it wasn't being called by a human. The API was (mostly) self-describing and the uses were endless.
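
A modern analogue of that moment, sketched in Python rather than Perl: a few lines turn a page built for human eyeballs into structured data (the URL is just an example; any page serving plain HTML works).

```python
# A modern analogue of the AltaVista trick: fetch a page the way a browser
# would and pull out its links as structured data. The URL is illustrative.
import re
import urllib.request

html = urllib.request.urlopen("https://example.com").read().decode("utf-8")
for link in re.findall(r'href="(http[^"]+)"', html):
    print(link)  # structured data from a page meant for human eyeballs
```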

We've lost that in mobile (reverse engineering of APIs notwithstanding), and unless we get it back somehow, I am sure innovation will slow, to say nothing of the power that users are giving up, especially in this age where more and more people are taking an interest in programming.

The iPad Mini is a dead man walking

Ars has the iPad Mini 4 review I've been waiting for, and the sad thing is that, just like last year, the machine is totally underwhelming. While the screen has been upgraded, it still lags way behind the one-year-old iPad Air 2 in performance. It's almost like Apple has been putting the B team on this device for a while now— as evidenced by the fact that it got less than 30 seconds of stage time at the announcement.

The obvious reason for this is that the larger iPhones have eaten its attack surface in the market. However, I think that is the facile out, given that the Mini still has 2x the screen area of the biggest iPhone. I'm pretty sure that though it may not be for everyone, there is definitely a set of users for whom the small tablet form factor makes sense. It's the size of a paperback, and after all, these devices are still primarily for consuming content, which means there is a primary use case around reading books if nothing else.

Having said that, it feels like Apple's heart has never been in this small tablet. It started with an executive looking at competitive pressure, which is never the place to start building a new product. More to the point, the new iPad Pro feels like the next turn of that particular screw: taking aim at the competitive offerings that are trying to combine the laptop and the tablet into something that can be used for work and fun at the same time.

I worry about that.

Adblockers and the Moral Compass

I've been watching this very public reversal on the ad blocker Peace by my favorite podcast client author with great interest. Personally I am not sure where I stand on the whole ad blocking thing; while the best argument I've heard against blockers is from my partner Josh, who likens ads to micropayments for content in this pithy tweet, ads have also become next to unbearable on network-constrained touch devices. Publishers gotta eat and all that, but users are being punished for the fact that mobile ads still haven't been figured out.

What I find more interesting about the pulling of the ad blocker (which I bought) is the online reaction of people who feel that they were somehow duped by Marco for having bought something he then pulled. While he could have shown a little more foresight in launching an ad blocker, it feels like most of the pundits forget the sheer pleasure an engineer gets from working with something new and figuring stuff out along the way. This is the intoxicating high often labeled "hacking on something," and it can often keep one from thinking things through.

However, the moment he realized it didn't sit well with his moral compass, he pulled the app. Full stop. Not only is this his right but also something we should be applauding instead of criticizing. All too often we who live in the startup world have to contend with people whose ethics bend towards the gravitational pull of self-interest, and this willingness to forego the fruits of his labor is a welcome respite from that.

The Anticloud bets

At the risk of being contrarian and wrong (just another form of dumb in my book), I do often wonder whether there isn't a distributed systems bet that is the anti-cloud hedge in the next generation of computing. So while Amazon stores more and more of the data in the world and delivers most of the MIPS in a metered way (or by the request with Lambda), and client devices become less stateful and more reliant on the network, perhaps someone somewhere figures out some applications that actually benefit from being totally distributed.

The two classic cases have been those that solve for network latency (storing big files closer to where they are going to be consumed) and those that solve for user control (or government control, in the case of China). The first feels like a moment in time as both wired and wireless broadband get faster, and the second is likely to persist for a while, at least while governments continue to dictate Internet policy. My favorite recent project here is Zeronet— though admittedly still far too geeky, it seems like the right way to deal with censorship.

Earlier today my friend Andy mentioned another anti-cloud bet when I brought this up: tracking the state of many users in a fully simulated world. In a limitation I did not know about, most high-performance game servers seem to top out around ~150 players. If this is true, there may just be a first reason to create a giant, highly performant distributed state machine that could track infinite virtual worlds without needing commensurate Amazon bills.
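
A toy sketch of the core idea: shard the world spatially so no single server has to track more than its ~150-player region (the grid size and server count below are invented numbers, purely for illustration).

```python
# A toy spatial shard: map each player's world position to the server
# responsible for that region, so state is split across many machines.

SHARD_SIZE = 100  # world units per shard cell (an invented number)

def shard_for(x, y, num_shards):
    """Map a world position to the server owning that region."""
    cell = (int(x // SHARD_SIZE), int(y // SHARD_SIZE))
    return hash(cell) % num_shards

players = {"ana": (12.0, 450.0), "bo": (980.0, 33.0)}
for name, (x, y) in players.items():
    print(name, "-> server", shard_for(x, y, num_shards=64))
```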

Worth thinking about, especially as we are currently barreling towards simulated worlds from many different vectors (VR, IoT, and 3D design, just to name a few...)

Technical secondary education: about time

I found this piece in today's Times a welcome policy on the part of New York City: setting a clear goal of getting every high school graduate a computer science education. I am hoping that, given the 10-year timeline, this initiative survives the current CS fever that has been fueled by this technology cycle and that we keep at it long after it stops being cool.

We teach math and science well beyond what most people who are not scientists use in their working lives, so adding a bit of CS in a world increasingly facilitated by software will arguably be more useful.

The other thing the article points out worth noting is the bottleneck in getting teachers ready to teach the subject. This is no small feat, especially in the highly political (and unionized) environment around public school teachers. Hopefully creative incentive structures, technology, and social pressure will be enough to accelerate the spread of pedagogical knowledge. Because after all, 10 years does feel like a long time.

The race to the bottom of the mobile ad money pile

I've been watching the emerging ecosystem of mobile ad blockers with some interest, in part because of the fevered pitch of the debate surrounding their ethics. On the side of publishers and advertising people, the claim is that ad blockers are tantamount to stealing money, while the blockers' defenders argue that crazy load times and take-over advertising hurt the user experience to the point where the mobile web (the only web that matters) will be hobbled beyond recognition. They are both interesting arguments that fail to get to the heart of the matter:

Mobile advertising just doesn't work for anything beyond causing an application download. I've long thought that this was about three things: what people want to do on their mobile device, better targeting, and the ad format.

The first I was clearly wrong about— people want to do all of the same things on mobile that they do on their desktop Internet, even if that means researching a new car on a tiny screen full of pinch-and-zoom gestures. Five years ago I would not have believed it, but the data is overwhelming: much like the best camera is the one you have on you, the best web browser is the one in your pocket.

Targeting too is pretty good these days, with a number of players using a variety of signals to find people across the mobile spectrum of apps and the web and to ensure that it is the same two eyes looking across time and devices.

It is the ad format, then, that is still woefully un-native for almost everything that matters. Apps can get cross-promoted in stream, as Facebook and Twitter have demonstrated, but it is far from clear that anything else works at all— either for traditional recall methods or for any other type of incentivized action. And since the future of the web is truly mobile, this is a big deal for both advertisers and the advertising industry.

Publishing to the Ether

This weekend I read a fantastic piece on why Twitter needs to relax the silly 140-character constraint and agreed with every part of it. Reading a long interspersed conversation (or publishing event) on Twitter is incredibly frustrating, and the power of Twitter as a platform is that it has supplanted RSS as a quasi-mass-market way to get distribution on posts.

How amazing would it be if Twitter went from short status update network to full-on content publishing network a la, say, Medium for people who have more to say than will fit in 140 characters? As I was pondering this, I saw this tweet, which basically makes the argument for the platforms that drive traffic and give writers audiences virtually overnight, either algorithmically or through human curation. Whereas a decade ago one had to gather an audience one RSS subscribe click at a time (or one email at a time), the game is now all about getting natural distribution by picking the right platform— or leveraging Twitter, the only way to use a social network to avoid publishing your insights into the deep dark void that is the Internet, where no one reads them.

If Twitter were to do this, I could for instance have published this piece there rather than having to link it in— or, worse still, gone totally green-screen retro and screen-snapped a raster image of the text like all of the cool kids do.

Short pedestrian thought on watchOS 2

Techcrunch had a relatively uninspiring piece on the future of the Apple Watch as enabled by watchOS 2, which allows the running of native apps. The piece touches on one interesting vector: taking vitals in rich ways through the API in order to monitor conditions. This is going to be huge, not just because Apple will take care of distribution of the hardware but because developers will be able to leverage a common API for building all sorts of monitoring apps. And as the watch revs, there will only be more sensors to take advantage of.

Personally, my short-term interests are much more pedestrian: I just want all of the media services I care about (Spotify for music, Overcast for podcasts & Audible for books on tape) to come out with apps that can pull content from the Internet through the phone to the watch and intelligently manage a cache there. I worry a bit that the Bluetooth headphone connection will be poor (it was during my one attempt at using it), but taking iTunes out of the sync equation feels like the first step to making the watch truly useful as a media consumption device.

Texting and driving (the real solution)

While ordering a phone today, I got lost in AT&T's site around teenage texting and driving, It Can Wait, and remembered how much I hate seeing any driver, but especially those riding around in MomTanks (X5s, Lexuses, Toyotas made for carrying full-on soccer teams and often handed down to teenage drivers), carelessly texting away while moving.

For a long time I've felt that this is a problem that needs a technical solution, and hopefully one which doesn't require the car to drive itself. The lame apps throw up that Waze "Are you the passenger?" dialog which everyone bypasses, a solution I've always felt has more PR/legal value than anything else. And as a matter of fact, it is unlikely that an app vendor is going to solve this (in what would be an Android-specific way).

But just as Apple took major steps to deter rampant theft with the auto-lock features of the phone, both they and Google could implement OS-level solutions that would prevent app use while driving a motor vehicle, likely with little work. The combination of sensors on the smartphone should be able to tell when it is being used by the driver as opposed to a passenger, in what would be a really fun machine learning project. And to boot, despite my general antipathy for big brother type stuff, it would be easy for the phones to broadcast potential texting-while-driving violations to law enforcement in a way that could really curb the habit.
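
A toy sketch of that machine learning project, with invented features and fake labels (a real system would train on labeled accelerometer, gyroscope, and GPS traces):

```python
# A toy driver-vs-passenger classifier. Features and labels are invented
# stand-ins for real sensor traces; this only illustrates the shape of
# the problem, not a working detector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical features: entry-side estimate, mount-angle variance,
# micro-movement correlation with steering, vehicle speed.
X = rng.normal(size=(500, 4))
y = rng.integers(0, 2, size=500)  # 1 = driver, 0 = passenger (fake labels)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
if clf.predict(X[:1])[0] == 1:
    print("lock out the texting UI")  # the OS-level intervention
```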

Today, as I understand it, checking cell data usage is common practice after an accident, which helps with liability in the same way the breathalyzer does. But this is a pocket supercomputer after all— shouldn't we be able to get a little more future to help out here?

Hardware as a service

The other big piece of news coming out of the Apple event was the hardware-as-a-service offering presented by Apple's iPhone Upgrade Program, where the company is using its balance sheet to make it easier for people in the US to get iPhones. Despite my not generally liking the idea of credit for disposable goods, this one is a great idea for three reasons.

First, the high end of the US market is saturated and requires some prodding to stimulate more replacement. Making the latest iPhone effectively cost $3/day is a great way to do this.

Second, it will really remove the lock that carriers have had on the purchase decision via their handset subsidies. This was already underway for many other reasons but if I were a carrier, I'd be very afraid that this is ultimately more damaging to my customer relationship than the prospect of Apple launching an MVNO (Mobile Virtual Network Operator) ever was.

And finally, to the degree that the company is using its balance sheet to make any part of this work (unclear, as Citizens Bank is currently involved), this is a great way to use that $200b on the balance sheet (I've long thought their dividending it out is a horrible idea). It bundles in a very high margin service (AppleCare+), stimulates demand, makes the primary customer relationship really be with Apple, and leads to further strategic opportunities where Apple is uniquely advantaged.

It will be super interesting if they do indeed put the gas pedal down on more of this kind of go-to-market innovation.

Thinking a little more about interface innovation for touch

I was thinking today about yesterday's post re: Apple's seeming confusion about whether to truly be "touch first" or focus instead on keyboards or styluses for input purposes. Clearly the end goal is to make the iPad a more productive device for creating and editing content and in that vein a keyboard and stylus may be the way to go.

But an interesting question to consider would be: outside of these two tried-and-true methods of input and voice, what else could the company invent? Gesture recognition seems obvious, and given that they own PrimeSense there is a good chance that will come (probably after getting into an Apple TV), but looking at my desk today with both an iPhone and an iPad on it, another idea hit me: why not use the small touch-sensitive screen of the phone as a sort of trackpad? And extending this more broadly, why not make all of the other Apple devices available for input through Bluetooth LE connections— for instance a Watch as a gesture recognizer, or an actual laptop as a fully complementary keyboard and trackpad?

As we collect more screens and sensors in our lives, connecting them in this way may actually give rise to much more interesting interfaces for input than we can think of by just bolting on the traditional stuff. And given Apple's unique advantage in getting all of these things to talk to each other, this might be a unique playground for them to try.

It's all about touch, right?

Many words will get blasted out to the Internet today over Apple's event and the easy thing to do will be to complain about the lack of true innovation that the company showed in all of its new products.

To me that is less interesting than the repeated chord of dissonance that the products and speakers kept hitting over the type of interface innovation that is going into both the new mega iPad and the iPhones. It would seem as though the primary input method that has carried the company for the last eight years, multi-touch, is now under assault from a number of different directions, two of which strike me as fairly dangerous for a company that has lost its leadership in the form of a design- and product-centric founder.

The first is that complexity is being layered into the affordance of the "deep touch" (or 3D Touch, as the company markets it, even as the executives on stage still refer to it as Force Touch). This contextual menu approach strikes me as such a power user's feature that I have a tough time seeing regular users even figuring out that it exists. Which of course will create a huge problem when app developers start putting stuff in the context menus that is not represented elsewhere.

In a way it is a replay of the one- versus two-button mouse design battle of the 80s: the story goes that for many years the Mac had a one-button mouse because Steve thought the mouse was new enough to computer users that two buttons would get confusing (the original Xerox PARC mouse had three buttons). Windows power users would complain about the one button and its limits on productivity, while the general population remained completely befuddled by Windows apps and their context menus.

The second interface dissonance that really confused the state of input was on the iPad Pro. It might be "touch first" (as the video took great pains to point out), but apparently the company has now sanctioned both the keyboard and a stylus (you can almost feel Steve rolling in his grave on this one). They are accessories and not included, which probably means they will be adopted slowly at first and supported incompletely by third-party applications. And while third parties have made valiant efforts to bring both types of peripherals to market, those have always been janky and an afterthought, and other than the fact that the keyboard is not powered and can be lighter, it was not clear what Apple will be doing to fix this.

Why go here at all? A rational person might assume that Apple is trying to kill the legacy PC with an iOS powered device that can be tablet, clamshell, and something new all at the same time. What I fear though is that they are doing it because they are confused about where the market is going and are thus trying to be all things to all people.

IoT fatigue

The NYT had a great piece over the weekend on the explosion of Internet-connected regular devices that made a compelling argument for why we are squarely in the middle of the gimmick phase of it, with companies connecting a variety of devices that don't really merit being connected, the result being confused users and ridiculous displays of technology for its own sake put on by retailers like Target. It's really a fun read.

To me the IoT in the home is really about being able to control major pieces of home functionality in useful ways, at first through the smartphone and eventually, hopefully, in concert and automatically based on variables like presence and time of day. There is also a small niche for being able to measure particular aspects of a home's existence: energy consumption being the big one a number of companies are focusing on, though the benefits here accrue less to the consumer than to the provider, as I'll explain in a moment, thus really limiting the market opportunity.

However, beyond control of the big things (HVAC, lighting, media and entertainment), it seems like the main use cases are around providers looking to put what are effectively subsidized vending machines in the home— where the subsidy comes straight out of the consumer's wallet. Witness the excitement around the Amazon Dash buttons (automatic reorder of consumer staples) as a great example. Some of these may be useful to the consumer but are really quite valuable to the merchant, energy provider, or insurance company. And as such they should probably come free for the end user.

For the rest, I expect a lot of IoT home carnage in the consumer market though I hope to be surprised by additional high value control use cases that are not clear as of today.

The big question of when on driverless

There has been as much hoopla about driverless cars coming onto real roads over the last couple of years as there has been about VR. In the previously mentioned Markoff book, Machines of Loving Grace, there is a whole chapter dedicated to how close we are to seeing them on real roads, and the blogosphere has been buzzing about them. And yet you'll get occasional pieces like this one, about a fixed-gear bike that can confuse the system, which make you pause and wonder whether we are not as an industry committing the classic mistake yet again of confusing a long view for a short distance.

There are three classes of challenges that have to be surmounted: business model, regulatory, and technical. The first I have a lot of confidence in, due simply to the aggregate amount of productive time wasted commuting that could be partially or fully reclaimed when the robot drives the car. The second will somewhat depend on the third but should fall into place absent some heinous lobby— and given that even Detroit is committed to the vision, that seems unlikely here.

It is the third class of challenge, the technical and engineering bits around getting a vision system that is reliable given the sheer heterogeneity of what a self-driving car can run into in the wild, that seems most difficult to handicap. I have no doubt that at some point this will happen but would love to understand better whether this point is two years from now or twenty. Come to think of it, there are probably a half dozen industries that would be so radically redefined by a fully autonomous vehicle that knowing whether it is 2, 20, or some number of years in between would be of quite some value.

A peripheral is a hard thing to displace

I chuckled going through this review of the Moto 360 versus the Apple Watch because it mostly misses the main point. The Moto is, to my eye, a more aesthetically appealing watch that is priced more aggressively. But it is still going to lose on iOS, and not because Motorola will get out-marketed but because the Apple Watch (even after watchOS 2 comes out) is primarily a peripheral for the iPhone— one that places a differently sized display in a more convenient location for certain tasks and which brings a microphone and a health sensor to the wrist. Most of the magic will continue to come from the guts of the smartphone for quite some time: connectivity to the rest of the world and dedicated processing to do the really interesting stuff.

More to the point, as the owner of the platform, Apple can just continue denying deep integration to third parties who make hardware (see what they did to the Pebble Time), at least until they determine that the watch is not strategic (which I highly doubt), in which case they might choose to treat it as a truly dumb peripheral.

This kind of platform power got Microsoft in trouble in the late 1990s but it doesn't seem to be even on the radar of regulators today (with the exception of content businesses like the book publishers). One day it may be but for now if I were in the "stuff on the wrist" business, I'd either focus on amazingly compelling (and differentiated) functionality or Android (not that the problem won't exist there as well) rather than trying to fit within the Apple walled garden.

Mobile feels like everything that matters, right?

My favorite mobile analyst does a good job of putting data to a sentiment I've heard expressed a lot lately: that there is no Internet vs. mobile Internet but rather that the mobile Internet has become The Internet for all intents and purposes.

I think this has now become noncontroversial enough that there may be dividends in questioning it. From an access-device perspective, or by total cumulative hours spent on the Internet, the statement is absolutely correct, and I would never argue otherwise. However, the data I haven't seen yet is whether productive work is getting done on mobile-first interfaces. That term is slippery, and I am sure for every document-centric use case I come up with there is an equivalent one with a third-world farmer trading beans with mobile banking and price transparency, but I do wonder why there is still so much getting done on PCs (desktops and laptops), even when the "apps" are really web applications that have encapsulated a prior document use case.

I am sure eventually everything will go mobile only, but I do wonder how long it will take and what kind of input peripheral innovation (outside of voice) we still have to see before getting there.

No one wants to miss the next "I" platform

It was almost exactly eight years ago that Apple detonated a nuclear bomb inside several industries by launching the first mass-market smartphone. At the time, it was remarkable to see how long other companies ridiculed the computer maker's entry into telecom; I distinctly remember being in Waterloo visiting RIM a year after its introduction and having a high-level executive tell me that he couldn't wait for this "touch screen fad to blow over" because he was sure it would. Indeed, of the established vendors, none thought this was a serious threat for at least a couple of years, and only Android (not yet released, and bought by Google for its people) understood how important it was to pivot quickly to the flat glass form factor.

My, how times change. It hasn't even been a full year of the smartwatch as a potential platform (which I'd date as starting with the Apple Watch this summer) and already the big guys are on to more compelling second versions and even "non-traditional" players are getting into the mix. What is amazing in this category is that no one is sure the smartwatch will even be a big platform, and yet companies are investing.

And a little closer to home, in VR we went from being laughed at in 2013 to at least three big companies chasing Oculus/Facebook's tail light within a year (Sony, HTC-Valve, Microsoft). It certainly feels like the time to jump on the platform bandwagon has decreased, in both of these cases well before the platform is considered to have gained critical mass.

Does this make sense? Absolutely, especially if the next one of these platforms that hits is even a tenth as successful as the smartphone. Quite likely, though, what will happen is that neither will hit, nor will any of the other half dozen big bets being made by companies large and small today, at least not within the timeframe of eight years. And when people have forgotten the lesson of how big something can get when all of the ingredients are just right (after all, smartphones were attempted long before 2007), something will break through and delight, amaze, and surprise us yet again!

Are we glorifying auto mechanics?

Is programming this era's equivalent of fixing a car or is it more fundamental to the future of a wide array of work?

Over the past few days I started by making the statement that casual coders were a great phenomenon, then looked at kids/beginners, at people whose jobs are not programming but who program for their work, and finally at the changes in the working programmer's daily life.

But this morning I remembered a recent conversation with a friend about coding bootcamps, where he was asking how all of a sudden we started thinking that "everyone should code" was actually a worthwhile pursuit. His argument was that this was similar to claiming "everyone should be able to repair their own car," which seems like a silly idea on the face of it given how few people even know how cars work, let alone how to fix them. Thinking a little more deeply about it, though, the notion that people should have a basic understanding of how an engine (or motor) creates kinetic energy that can be put to good use is not a terrible idea. Ergo why physics is a required part of the high school curriculum.

But in my view, software is even more important in the role that it will continue to play in the world for many decades to come. Even long after computers are writing 98% of the software we depend on, a basic understanding of how procedural logic solves a problem will keep people in an important part of the hierarchy.

I've been reading John Markoff's excellent Machines of Loving Grace, a super interesting history of the difference in philosophy between the AI researchers and the IA ("Intelligence Augmentation") ones over the past few decades. Throughout the book Markoff makes the claim that people who create tools shape those tools with their own values. To that end, software may end up being a much more pliable tool to imbue with values than the engine, the airplane, or even the written word itself.

This feels like the real reason why we should be glorifying the mechanics of tomorrow.

Professional coders and composing with Lego bricks

To finish noodling on the casual coder, I think I'll end with the professional programmer. Back in the day, this person's job was writing really tight code around the limitations of Moore's Law to make an underpowered device do something that felt like magic. But a lot of doublings have taken place since Gordon Moore's original observation in 1965, and we now find ourselves wasting CPU cycles in the name of user experience and programmer productivity.

Most of this has come in the form of higher level languages (Python, Ruby, Javascript) but there is another equally if not more important side effect of not having to be as tightly coupled: the value of open source as the Lego bricks that working programmers can use to compose their way to a solution. Need functionality for JPEG metadata parsing? There are over a dozen projects on GitHub that will get you there. Same for servers, protocols, databases, and just about anything else you might need.
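
A minimal sketch of that composition, assuming the Pillow imaging library (one of those dozen-plus bricks) is installed and a photo.jpg is on hand: metadata parsing becomes a few lines of glue instead of a binary-format project.

```python
# Lego-brick programming: JPEG metadata parsing via the Pillow library
# instead of hand-rolling an EXIF parser. "photo.jpg" is a placeholder.
from PIL import Image, ExifTags

img = Image.open("photo.jpg")
exif = img.getexif()
for tag_id, value in exif.items():
    tag = ExifTags.TAGS.get(tag_id, tag_id)  # numeric tag -> readable name
    print(f"{tag}: {value}")
```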

It is hard to call a working programmer a casual coder— after all, these people are getting paid to get work done by making computers do things. But what is similar is that in the era of composable software, many more people who could not otherwise have been professional programmers now can be, making the pool of talent much larger. It is not casual but it is inclusive, which means in effect that software will be able to creep into smaller and smaller nooks and crannies as it helps to connect the world and drive huge productivity gains into the future.

Long live the casual coder!

The intermediate casual coders help software eat the world

The belly of the beast when it comes to the casual coder lives in the people whose jobs are about something other than writing software but who increasingly need software in order to handle either the scale or scope of what they have to get done.

One really interesting phenotype of this intermediate coder is the scientist (see here for an example of a prototypical one) who needs to process data at a scale that precludes manual workflows. They do the work in the name of discovering new things, so the code itself is no more important than the beakers a chemist would use to mix compounds— that is, it needs to work and just not get in the way.

Another example worth mentioning is the quant investor, who trades either partially or fully algorithmically based on strategies that a program can execute much more efficiently than human traders (see what my friends at Quantopian are doing to democratize access to this type of intermediate coding). In the case of good algorithmic trading, the scope of what can be considered by a program, as opposed to a human weighing stocks and derivatives, means that the edge is with the machines.
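
A toy sketch of the kind of strategy such a coder might write— a moving-average crossover over simulated prices (a platform like Quantopian would supply the real market data and execution):

```python
# A toy moving-average crossover strategy in pandas. Prices are simulated;
# this illustrates the shape of intermediate-coder work, not a real edge.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
prices = pd.Series(100 + rng.normal(0, 1, 250).cumsum())  # fake daily closes

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()
signal = (fast > slow).astype(int)   # 1 = hold the stock, 0 = stay in cash

daily_returns = prices.pct_change()
strategy_returns = daily_returns * signal.shift(1)  # trade on prior signal
print(f"strategy total return: {strategy_returns.sum():.2%}")
```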

When people talk about software eating the world it is the intermediate coders that I think of first: slowly and unwittingly writing what they think of as disposable software with little fanfare that ends up making the gears of the world turn.

Don't pander to the beginners: on contrived tools

Any exploration of casual coders should probably start at the beginning, with kids trying to learn to program. Having two of my own, I've been through a lot of tools in search of something that holds their attention— from Lego Mindstorms to Scratch to Minecraft mods, I've tried almost everything. And in most cases, the tool results in one of two failure modes: either it is simply too contrived to hold the attention of the learner beyond, say, the amount of time it takes to learn the rules of a moderately complex board game, or it is too simplistic to compete with the existing alternatives built by the pros.

This latter point is often lost on people seeking to make computer science education more accessible. Mucking around with Scratch on a Raspberry Pi that is gasping to drive a real computer display doesn't even hold a candle to a Flash game in an ancient web browser, to say nothing of the GPU-enhanced stuff most kids are exposed to on today's mobile devices. And no matter how relevant the style of pedagogy, it's unlikely to compete, due simply to the number of folks throwing resources at the mainstream platforms.

This is where the emerging platforms can enter the picture in interesting ways. Physical computing (the "Internet of Things") is one of those edges— you can't buy a talking alarm clock that blinks the lights in your room from the App Store, nor would any sane CE company make one for Best Buy, but it is fairly accessible to build one. Another possible edge might actually still exist on the open web, where the scraping and remixing of existing information sources can engender some of the same feelings around control of one's environment.

The rewards that come from control and mastery are way more important than the pedagogy embodied in the tools when it comes to moving up the ramp of casual coding; as such, educators would do well to start there.

Programmability and the Casual Coder

One notion I love to think through is that of the Casual Coder, or the person who in the course of using Excel or doing data analysis or trying to wire up an IoT device ends up programming without realizing it. It feels like the very best on-ramp to the complex world of everything connected to everything that we are heading into.

Or put another way: the complexity of the world is rising faster than our ability to cope with it through a series of chiclets on the home screen/springboard of our smartphones. We'll either have to find it in ourselves to program the world to our own ends or end up being programmed by the stacks looking to trade our eyeballs and personal information to advertisers for some short term convenience, all the while becoming monkeys swiping and tapping our glass surfaces in the name of useful.

This strikes me as a good theme to explore over the next few days while we close out another summer.

The Raspberry Pi is the mortar for the Internet of Things

A couple of years ago I went to see Eben Upton (the founder of the Raspberry Pi foundation) talk about the project. At that time, he had just passed the 2M mark in terms of cumulative units shipped and I remember joking during the Q&A that the killer app for the little computer was likely "gathering dust in a desk drawer."

The thing I didn't realize back then (as is often the case with really useful "glue" technologies like a low cost computer that can connect anything to anything) was how important the RPI and all of its clones would become to entrepreneurs and hobbyists alike trying to wire up the physical world. I've seen it go into 3d printers and Internet connected thermometers. I've used it to build a weather-aware Christmas tree in what turned out to be one of the most engaging projects I've done with my kids. And, inspiring this post, this morning while wrestling with a way to print remotely to my Makerbot, I found OctoPi, a customized Raspberry Pi disk image that turns a Pi into a 3d printer server with job pooling, cloud slicing, and even a video feed.
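
Part of what makes OctoPi compelling is that the underlying OctoPrint server exposes a REST API, so checking on a print from anywhere takes just a few lines. Here is a rough sketch in Python; the hostname placeholder, the /api/job endpoint, and the response fields are based on my recollection of the OctoPrint docs, so treat them as assumptions:

```python
# Poll OctoPrint for the state and progress of the current print job.
# NOTE: the URL, API key, and JSON fields below are illustrative;
# verify them against the OctoPrint API documentation.
import requests

OCTOPI_URL = "http://octopi.local"          # default OctoPi hostname
API_KEY = "YOUR_OCTOPRINT_API_KEY"          # placeholder API key

resp = requests.get(OCTOPI_URL + "/api/job",
                    headers={"X-Api-Key": API_KEY})
resp.raise_for_status()
job = resp.json()
print(job["state"], job["progress"]["completion"], "% complete")
```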

In short, the RPI feels to me like the mortar of the Internet of Things, connecting all sorts of things to the network in a cost effective, rapid-prototyping way that lets people find interesting use cases before having to sit down and lay out a board with a microcontroller and a bunch of radios.

[And on that last point: there are a number of alternatives for prototyping Internet-connected things: the BeagleBone Black, the Intel Edison, the Arduino (which has a very rich ecosystem of its own), and even the new belle of the ball, the ESP8266 (basically a very low cost microcontroller with WIFI built in, <$10 in lots of 1). I've played with all of these and they each have their merits, but none has the rich ecosystem that has sprung up around the Pi; Arduino comes closest but is unfortunately not as general purpose or as connected as the Pi.]

The dark (but awesome) show Mr. Robot even features a Pi as a main character in its first season. I can't quite sing its praises at that level, but this here is my little ode to that cheap, underpowered bit of mortar.

The Importance of the channel

I was recently shopping for a new grill, and given the love I've developed for smoking meat, it was a different kind of grill from what I've bought in the past (a Big Green Egg). I did a bunch of online research, as I've been trained to do over the last decade and a half of buying things, and found that I had a number of questions that I couldn't quite get answered by marketing materials, forum posts, or tweets.

So I went to the channel— by which I mean dealers that sell the grill I wanted along with others and who could talk to me about pluses and minuses while they showed me the bells and whistles of each option. It was a great experience; engaging, entertaining, and most of all quite useful in the purchase I ended up making.

This type of guided (or dare I say, curated) commerce has been lost in the gutting of bricks and mortar outfits in favor of easy price discovery and the convenience of buying things online. To be fair, most retailers lost this battle way before the Internet put them six feet under by choosing to employ minimum wage human drones to push crap on commission with little knowledge of, or passion for, what they were selling (just try a Best Buy or a City Sports to see for yourself).

But I wonder if the channel isn't going to make a comeback as we tire of the limited experience that represents the "state of the art" in e-commerce today. E-commerce seems stuck at 10% of total retail sales; I used to think of that ceiling as the goal of every large retailer with a multi-channel presence, but it may instead be about the need for help and guidance.

My 10M watch bet

At a board meeting recently I bet a fellow board member that Apple could easily ship 10M watches this year, and according to IDC data out today, Apple looks well on its way, having shipped 3.6 million units in Q2 alone. This is crazy for a product that has 25 "little use" use cases and no killer one to hang its hat on, especially in what is essentially its first real quarter of shipping. Add in how half done all of the first party apps were and how badly the API hobbled all of the third party apps, and it becomes an even bigger head scratcher.

I'm not sure we'll know the exact reasons for the early success of the Watch for a while (or even whether we can call it early success), but presume for a moment, for the sake of a key point, that it gets to the 10M mark and soon eclipses Fitbit as the most popular "wearable" in terms of unit sales: as these devices go from being technology products to fashion accessories that happen to be convenient, we may find ourselves increasingly in the land of "little use" use cases, where it is the sum total of them that convinces people to adopt rather than the one big use case to rule them all. I suspect wading into existing categories that thrive on the basis of taste and utility will only exacerbate this, both for Apple and its competitors. Which is interesting to think about now that all of the big tech companies have set their sights on the car as the next "big" market, especially if the first versions are only partially autonomous and thus have to win through a combination of these "little use" use cases.

The Strategic WIFI Access Point

Last week's announcement of Google's OnHub wireless router may seem silly: after all, what is a company fighting the spectre of privacy violations doing launching a super premium ($200) wifi router into a commodity market?

This is obviously all about getting people to consume more and better Internet. Given the advantage that Google has so long as the platform of choice remains the Web (and not native apps where flakey wireless can be managed more gracefully by the app developer), this notion of controlling the "last 50 feet" and delivering a great experience starts to make sense.

On top of it, the Access Point (AP) is super strategic for all sorts of new applications that run the gamut from IoT devices without interfaces to intelligent WIFI offload of what might traditionally be considered the realm of cellular traffic. The company's Project Fi already points to the ambitions they have there of becoming a sort of "super MVNO (Mobile Virtual Network Operator)" where they can deliver faster and more reliable Internet service than typical smartphone plans at lower prices.

I'm not sure this will gain meaningful traction at $200, especially as other companies fighting for the home (the broadband companies and Apple) come to realize how strategic the home AP can be and start subsidizing it. But if the price can be brought down over time (while delivering a 10X better experience for people fighting flaky, cheap, increasingly overwhelmed APs), it is definitely worth the shot.

Technological Optimism

In the land of massive generalizations, there are two ways to look at the current crop of startups: with technological optimism or through the filter of business model incrementalism:

  • In the former, an employee/investor/customer looks at the project and thinks: "does this fit in my wildly optimistic, science fiction infused model of the world?" or more simply: "could this be part of the future I want?" Generally speaking it is a geek ethos, developed through tinkering and careful study of the scifi canon, from the novels of Neal Stephenson to Star Trek TNG. A dose of determinism is broadly applied, as in "technology will make all things better in time." Most interestingly, it is this mode of looking at the world that often lets people see what most do not while a venture is young and looks small and irrelevant.

  • Business model incrementalism is not quite the opposite. Rather it is rooted in a very mechanical form of "pattern matching." It goes something like this: "I've seen this work in transportation; therefore with any suitably fragmented supply base where the underlying asset is rarely fully utilized, it should work. Uber for cement mixers FTW!" I am convinced that business model incrementalism is steeped in American business education and empowered by the twin forces of being told over and over that you can outthink everyone else through proper analysis, and the certainty and ease with which one can recalculate rows in a spreadsheet. Performed at a high level by a very intelligent person it can be quite helpful, but in most cases it is a potato peeler in the hands of a two year old.

A few weeks ago, I was reminded of both of these modes of thinking while thoroughly enjoying Season 1 of Breaking Smart by Venkat Rao, an ode to technological optimism that is a breath of fresh air at a time when most conversations are so squarely centered on business model incrementalism (in fact, I can't stop recommending particular essays to people at the end of meetings, always a good sign). Not all of the pieces will have broad appeal, but they mix technology, economics, cultural analysis, and history into a wonderful ice cream swirl of technological optimism that is worth consuming in as few sittings as possible, particularly amidst the recent macro stutters that some are interpreting as a meltdown.

Thank you Venkat!

Cheap laptops: the end of the road for the DynaBook

This NYT piece has some great data on just how thoroughly Chromebooks have destroyed the iPad in the education sector. To a first approximation, analysts believe this is simply about price: Apple is the premium vendor and Chromebooks sit at the other end, making bulk purchases much more palatable for school districts.

But there is something deeper here: a fundamental overshooting of the basic requirements for productive computing that is ripping through the industry in the form of prices that are only going to keep coming down. The laptop form factor is 47 years old, going all the way back to a concept put on paper by Alan Kay at Xerox PARC that described most of the features the industry would rush to perfect over the next five decades. (There is even an excellent book on the development of the Thinkpad, one of the most iconic laptops around, that details some of these early feats of awesome engineering before Moore's Law and battery technology were ready to comply.)

But buying a premium laptop today is an exercise in burning money along a silly dimension (weight, power, finish) due to how good most screens, keyboards, mobile CPUs and batteries have become (exhibit A is this Macbook Retina I am typing on now— beautiful but utterly unnecessary especially considering the tradeoffs of its svelte form).

Hence the tablet, the industry's attempt to open up the market for a new type of device. However, the lack of real software innovation in the productivity space, combined with a long replacement cycle, seems to have put out the fire before it could really catch on.

Meanwhile vendors empowered by Google's simplistic OS are likely to keep eating away at the general form factor, something which will be bad for Apple, Microsoft, and anyone else who hoped to hold on to that market.

Finally, it bears mentioning that even the impressive growth of Chromebooks is merely a case of fleas dancing on the butt of the elephant that is the smartphone.

Big Data can be Small Data and Useful

This article in Computerworld, featuring a tour of the predictive analytics behind Proactive in iOS9 (the Siri AI improvements), does two things at once: it points at some pretty useful features and it showcases how little "AI" a program really needs if it has high enough fidelity on the data collected. For instance, how hard is it to use your personal mail repository to grep for the telephone numbers of incoming calls? Or to "predict" what you might need given a repeating calendar activity (gym/office/home)?
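
Not very hard at all, as it turns out. Here is a toy version of the caller ID trick in plain Python; the mbox path and the US-centric number pattern are invented for illustration:

```python
# Build a phone-number-to-sender index by grepping a local mail
# archive, then look up an incoming caller. The mbox path and the
# US-style number pattern are placeholders for illustration.
import mailbox
import re

PHONE_RE = re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]\d{4}")

def build_caller_index(mbox_path):
    index = {}
    for msg in mailbox.mbox(mbox_path):
        body = msg.get_payload(decode=True) or b""   # skips multipart bodies
        sender = msg.get("From", "unknown")
        for number in PHONE_RE.findall(body.decode("utf-8", "ignore")):
            digits = re.sub(r"\D", "", number)       # normalize to digits
            index.setdefault(digits, sender)
    return index

index = build_caller_index("archive.mbox")
print(index.get("2125551234", "Unknown caller"))     # hypothetical caller
```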

Of course all of these activities and more will benefit from machine learning eventually but we are so early in the development of personal agents that simple heuristics will carry the day for a long time still.

Which brings me to the ultimate fallacy of "big data" as justification for the predominant consumer Internet business model. Companies like Google and Facebook rationalize the collection of vast amounts of personal data in the name of user experience, arguing that the more data they have, the better the experience gets. And while this is true for some domains (speech recognition, for instance), it is not true for many others, or at least won't be until we are far into strip mining the tricks coming in iOS9.

Tim Wu's recent New Yorker piece arguing that Facebook should pay users for the right to collect their data may point to a growing sensitivity on the part of mainstream users. So hopefully others will unpack the big data fallacy when it comes to "improvements" by asking: yeah, but improvements for whom?

Any sufficiently advanced technology is indistinguishable from magic

That was Arthur C. Clarke's line way back in the last century. And sadly, "magic", like "wow" and "awesome", is a word so often bastardized in our industry that we have become inured to it. But still:

My father is almost 78 years old and lost his wife of 50 years just ten months ago. He's from a generation that expresses loneliness in a horribly repressed way (think of a toothpaste tube squeezing its last bit into misshapen lumps along its surface). Today, in a classically uncharacteristic move, he called an Uber and showed up, 75 miles and 100 minutes after Facetiming me, to spend the night by the water. It was a surprise and it is wonderful.

When I think of the "on demand economy" or the "O2O (offline to online) businesses" I think of privileged hipsters living in the urban equivalent of an assisted living community for the young. It is no longer driving but ubering to the "it" party. It is picking between gluten free alternatives for takeout.

Which is why it is so wonderful to see how this information revolution at the boundary of the network and the real world can help people who need it most in the moments when it matters.

About half a billion people, that is. With another seven and a half billion to go, we've got our work cut out for us.

The Swiss Armyfication of wearables

I liked the analogy in this thoughtful piece on dedicated fitness trackers versus smartwatches because it rings true: the former are pocket knives (single function) and the latter are swiss army knives (multi-function, taking up the same amount of room). Eventually for most people the latter will win. It's a tough statement to swallow when Fitbit is a public company worth ~$8.5B (as of today) and when even Apple is expected to disappoint with the sales numbers on its first smartwatch but ultimately I think it is correct for three reasons, all of which apply to wearables of all shapes and sizes:

  • real estate: there are only two wrists on the body, and while some of my brethren have taken to wearing a smartwatch and a fitness tracker (you know who you are), this Lynda Carter look is unlikely to attract even an early adopter following. So as a dedicated device vendor, you are fighting for limited premium real estate.
  • connectivity: I ran with an iPod for years past the launch of the iPhone and only gave it up when I realized that the combination of streaming music and podcasts meant I needed a connected device. And though fitness trackers today employ the smartphone-as-backhaul pattern in the same way that smartwatches do, BTLE is still finicky enough that regular user intervention is required, something that may or may not happen if that backhaul is not shared across a variety of valuable use cases, to say nothing of the advantage the platform owner enjoys in maintaining that BTLE link (canvass Pebble versus Watch users on iOS to see this in action).
  • software: this is the sneaky lever (also covered in the aforementioned piece). Killer apps will drive the platform, and it is clear that this is going to be a game of third party innovation (just try the first party Apple fitness apps if you are unsure why). Now, this claim is also hard to swallow because of how crappy the 1.0 Watch API was (a fact that should get the man in charge of it promoted to the iPod group), but it will happen. Watch 2.0 looks like a much richer way for developers to write apps, and it will be this app explosion, more than either of the two other reasons, that ultimately kills the pocket knife in this particular domain.

Software does seem to be the world's greatest lever in almost any battle these days.

On Bits and bits

I started blogging back in 2004 when I was struggling to hire software engineers in New York during the first post dotcom ice age. Hiring then was not a struggle like today's in that the job market wasn't crazy; instead hiring was all about the "fit challenge." Most great engineers in NYC back in 2004 wanted to work at investment banks and hedge funds and these aspirations did not mix well with the consumer Internet zeitgeist we were trying to reignite. So I started a blog so that when they inevitably googled me and/or the job, they'd come across a first filter that didn't cost me a call or meeting.

As a recruiting aid, it worked so well in fact that I kept doing it across another startup and right through that startup's life and into my brief tenure at HP. And in the process something funny happened: I had real fun doing it (turns out that "thinking in writing" really does work!).

And then I got into the venture business where most (though not all) people blog as marketing along well-trodden paths of "valuable content" that breaks down roughly into:

  • financing advice
  • management advice
  • life advice

Some of this content, when well researched and not generalized from one's narrow (and, in the case of entrepreneurs-turned-VCs, rapidly fading) point of view, can be incredibly valuable. In my opinion, the spread of knowledge around best practices that comes from the very best of these blog posts has been as much of a factor as "cloud computing" or "open source" in flattening the entry ramp to the startup life.

And some of the authors have a knack for this. Alas this has not been me.

And so blogging turned from something that was fun into a chore, and from something off the cuff about interesting things I was seeing into tons of work in search of "insight" and "value" I could deliver to an amorphously defined audience already deluged with tips, tricks, and the rare gem from Venturelandia.

The result: after almost a decade of doing it regularly, I sort of got tired and petered out. But I've missed it, or at least a part of it.

Ergo Bits. This started a couple of weeks ago as a project to rewrite my old blogging software (which I had last overhauled in 2007) but has become an experiment to see if I can rekindle what I used to like about thinking in writing. Hence the simplistic format and the lack of buttons to share, tweet, or comment. And most importantly, the freedom from the pressure that comes with having to write long and "valuable" posts.

These pieces here, they'll be just little bits.